00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1070
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3732
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.131 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.132 The recommended git tool is: git
00:00:00.132 using credential 00000000-0000-0000-0000-000000000002
00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.187 Fetching changes from the remote Git repository
00:00:00.189 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.234 Using shallow fetch with depth 1
00:00:00.234 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.234 > git --version # timeout=10
00:00:00.267 > git --version # 'git version 2.39.2'
00:00:00.267 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.288 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.288 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.724 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.734 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.746 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.746 > git config core.sparsecheckout # timeout=10
00:00:06.756 > git read-tree -mu HEAD # timeout=10
00:00:06.771 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.787 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.787 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.912 [Pipeline] Start of Pipeline
00:00:06.927 [Pipeline] library
00:00:06.929 Loading library shm_lib@master
00:00:06.929 Library shm_lib@master is cached. Copying from home.
00:00:06.944 [Pipeline] node
00:00:06.957 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.959 [Pipeline] {
00:00:06.965 [Pipeline] catchError
00:00:06.966 [Pipeline] {
00:00:06.975 [Pipeline] wrap
00:00:06.981 [Pipeline] {
00:00:06.985 [Pipeline] stage
00:00:06.987 [Pipeline] { (Prologue)
00:00:07.000 [Pipeline] echo
00:00:07.001 Node: VM-host-SM38
00:00:07.005 [Pipeline] cleanWs
00:00:07.014 [WS-CLEANUP] Deleting project workspace...
00:00:07.014 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.020 [WS-CLEANUP] done
00:00:07.201 [Pipeline] setCustomBuildProperty
00:00:07.269 [Pipeline] httpRequest
00:00:07.632 [Pipeline] echo
00:00:07.633 Sorcerer 10.211.164.20 is alive
00:00:07.640 [Pipeline] retry
00:00:07.641 [Pipeline] {
00:00:07.650 [Pipeline] httpRequest
00:00:07.655 HttpMethod: GET
00:00:07.656 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.656 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.675 Response Code: HTTP/1.1 200 OK
00:00:07.675 Success: Status code 200 is in the accepted range: 200,404
00:00:07.676 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.846 [Pipeline] }
00:00:33.862 [Pipeline] // retry
00:00:33.869 [Pipeline] sh
00:00:34.159 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.175 [Pipeline] httpRequest
00:00:34.548 [Pipeline] echo
00:00:34.550 Sorcerer 10.211.164.20 is alive
00:00:34.559 [Pipeline] retry
00:00:34.561 [Pipeline] {
00:00:34.575 [Pipeline] httpRequest
00:00:34.581 HttpMethod: GET
00:00:34.581 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:34.582 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:34.588 Response Code: HTTP/1.1 200 OK
00:00:34.588 Success: Status code 200 is in the accepted range: 200,404
00:00:34.589 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:48.456 [Pipeline] }
00:01:48.473 [Pipeline] // retry
00:01:48.481 [Pipeline] sh
00:01:48.766 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:52.079 [Pipeline] sh
00:01:52.362 + git -C spdk log --oneline -n5
00:01:52.362 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:52.362 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:52.362 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:52.362 66289a6db build: use VERSION file for storing version
00:01:52.362 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:52.382 [Pipeline] withCredentials
00:01:52.393 > git --version # timeout=10
00:01:52.405 > git --version # 'git version 2.39.2'
00:01:52.425 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:52.426 [Pipeline] {
00:01:52.436 [Pipeline] retry
00:01:52.438 [Pipeline] {
00:01:52.451 [Pipeline] sh
00:01:52.737 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:53.694 [Pipeline] }
00:01:53.711 [Pipeline] // retry
00:01:53.716 [Pipeline] }
00:01:53.734 [Pipeline] // withCredentials
00:01:53.743 [Pipeline] httpRequest
00:01:54.314 [Pipeline] echo
00:01:54.316 Sorcerer 10.211.164.20 is alive
00:01:54.326 [Pipeline] retry
00:01:54.328 [Pipeline] {
00:01:54.341 [Pipeline] httpRequest
00:01:54.347 HttpMethod: GET
00:01:54.348 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:54.348 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:54.357 Response Code: HTTP/1.1 200 OK
00:01:54.357 Success: Status code 200 is in the accepted range: 200,404
00:01:54.358 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:07.497 [Pipeline] }
00:02:07.515 [Pipeline] // retry
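
The retry/httpRequest/tar sequences above are the job's package-cache fast path: each source tree (jbp, spdk, dpdk) is pulled as a tarball from the internal package server at 10.211.164.20 and unpacked in the workspace. A rough standalone equivalent of that pattern, sketched in plain bash with curl standing in for the Jenkins retry/httpRequest steps (curl and its --retry flag are an assumption of this sketch; the server path and tar flag come from the log):

  #!/usr/bin/env bash
  set -euo pipefail
  pkg_server=http://10.211.164.20/packages    # package cache seen in the log
  fetch_and_extract() {
      local tarball=$1
      # curl --retry approximates the pipeline's retry { httpRequest ... } wrapper
      curl --fail --retry 3 --output "$tarball" "$pkg_server/$tarball"
      tar --no-same-owner -xf "$tarball"      # same extract flag the log shows
  }
  fetch_and_extract spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
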
00:02:07.523 [Pipeline] sh
00:02:07.808 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:09.783 [Pipeline] sh
00:02:10.068 + git -C dpdk log --oneline -n5
00:02:10.068 eeb0605f11 version: 23.11.0
00:02:10.068 238778122a doc: update release notes for 23.11
00:02:10.068 46aa6b3cfc doc: fix description of RSS features
00:02:10.068 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:10.068 7e421ae345 devtools: support skipping forbid rule check
00:02:10.087 [Pipeline] writeFile
00:02:10.101 [Pipeline] sh
00:02:10.387 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:10.400 [Pipeline] sh
00:02:10.685 + cat autorun-spdk.conf
00:02:10.685 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.685 SPDK_TEST_NVME=1
00:02:10.685 SPDK_TEST_FTL=1
00:02:10.685 SPDK_TEST_ISAL=1
00:02:10.685 SPDK_RUN_ASAN=1
00:02:10.685 SPDK_RUN_UBSAN=1
00:02:10.685 SPDK_TEST_XNVME=1
00:02:10.685 SPDK_TEST_NVME_FDP=1
00:02:10.685 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:10.685 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:10.685 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:10.694 RUN_NIGHTLY=1
00:02:10.696 [Pipeline] }
00:02:10.709 [Pipeline] // stage
00:02:10.721 [Pipeline] stage
00:02:10.723 [Pipeline] { (Run VM)
00:02:10.735 [Pipeline] sh
00:02:11.020 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:11.020 + echo 'Start stage prepare_nvme.sh'
00:02:11.020 Start stage prepare_nvme.sh
00:02:11.020 + [[ -n 1 ]]
00:02:11.020 + disk_prefix=ex1
00:02:11.020 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:11.020 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:11.020 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:11.020 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.020 ++ SPDK_TEST_NVME=1
00:02:11.020 ++ SPDK_TEST_FTL=1
00:02:11.020 ++ SPDK_TEST_ISAL=1
00:02:11.020 ++ SPDK_RUN_ASAN=1
00:02:11.020 ++ SPDK_RUN_UBSAN=1
00:02:11.020 ++ SPDK_TEST_XNVME=1
00:02:11.020 ++ SPDK_TEST_NVME_FDP=1
00:02:11.020 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:11.020 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:11.020 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:11.020 ++ RUN_NIGHTLY=1
00:02:11.020 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:11.020 + nvme_files=()
00:02:11.020 + declare -A nvme_files
00:02:11.020 + backend_dir=/var/lib/libvirt/images/backends
00:02:11.020 + nvme_files['nvme.img']=5G
00:02:11.020 + nvme_files['nvme-cmb.img']=5G
00:02:11.020 + nvme_files['nvme-multi0.img']=4G
00:02:11.020 + nvme_files['nvme-multi1.img']=4G
00:02:11.020 + nvme_files['nvme-multi2.img']=4G
00:02:11.020 + nvme_files['nvme-openstack.img']=8G
00:02:11.020 + nvme_files['nvme-zns.img']=5G
00:02:11.020 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:11.020 + (( SPDK_TEST_FTL == 1 ))
00:02:11.020 + nvme_files["nvme-ftl.img"]=6G
00:02:11.020 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:11.020 + nvme_files["nvme-fdp.img"]=1G
00:02:11.020 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:11.020 + for nvme in "${!nvme_files[@]}"
00:02:11.020 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:02:11.280 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:11.280 + for nvme in "${!nvme_files[@]}"
00:02:11.280 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:02:12.223 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:12.223 + for nvme in "${!nvme_files[@]}"
00:02:12.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:02:12.223 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:12.223 + for nvme in "${!nvme_files[@]}"
00:02:12.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:02:12.223 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:12.223 + for nvme in "${!nvme_files[@]}"
00:02:12.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:02:12.223 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:12.223 + for nvme in "${!nvme_files[@]}"
00:02:12.223 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:02:12.796 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:12.796 + for nvme in "${!nvme_files[@]}"
00:02:12.796 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:02:13.371 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:13.371 + for nvme in "${!nvme_files[@]}"
00:02:13.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:02:13.371 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:13.371 + for nvme in "${!nvme_files[@]}"
00:02:13.371 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:02:14.315 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:14.315 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:02:14.315 + echo 'End stage prepare_nvme.sh'
00:02:14.315 End stage prepare_nvme.sh
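
Each "Formatting '…', fmt=raw size=… preallocation=falloc" line above is qemu-img output, so prepare_nvme.sh amounts to pre-allocating one raw backing file per entry in the nvme_files map. A minimal sketch of that loop, assuming create_nvme_img.sh reduces to a qemu-img create call (its internals are not shown in this log):

  #!/usr/bin/env bash
  # Sketch only: recreates the backing files the log shows being formatted.
  set -euo pipefail
  backend_dir=/var/lib/libvirt/images/backends
  declare -A nvme_files=(
    [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G [nvme-multi1.img]=4G
    [nvme-multi2.img]=4G [nvme-openstack.img]=8G [nvme-zns.img]=5G
    [nvme-ftl.img]=6G [nvme-fdp.img]=1G
  )
  for nvme in "${!nvme_files[@]}"; do
    # preallocation=falloc matches the 'preallocation=falloc' in the log output
    qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex1-${nvme}" "${nvme_files[$nvme]}"
  done
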
00:02:14.328 [Pipeline] sh
00:02:14.614 + DISTRO=fedora39
00:02:14.614 + CPUS=10
00:02:14.614 + RAM=12288
00:02:14.614 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:14.614 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:14.614
00:02:14.614 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:14.614 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:14.614 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:14.614 HELP=0
00:02:14.614 DRY_RUN=0
00:02:14.614 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:02:14.614 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:14.614 NVME_AUTO_CREATE=0
00:02:14.614 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:02:14.614 NVME_CMB=,,,,
00:02:14.614 NVME_PMR=,,,,
00:02:14.614 NVME_ZNS=,,,,
00:02:14.614 NVME_MS=true,,,,
00:02:14.614 NVME_FDP=,,,on,
00:02:14.614 SPDK_VAGRANT_DISTRO=fedora39
00:02:14.614 SPDK_VAGRANT_VMCPU=10
00:02:14.614 SPDK_VAGRANT_VMRAM=12288
00:02:14.614 SPDK_VAGRANT_PROVIDER=libvirt
00:02:14.614 SPDK_VAGRANT_HTTP_PROXY=
00:02:14.614 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:14.614 SPDK_OPENSTACK_NETWORK=0
00:02:14.614 VAGRANT_PACKAGE_BOX=0
00:02:14.614 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:14.614 FORCE_DISTRO=true
00:02:14.614 VAGRANT_BOX_VERSION=
00:02:14.614 EXTRA_VAGRANTFILES=
00:02:14.614 NIC_MODEL=e1000
00:02:14.614
00:02:14.614 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:14.614 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:17.160 Bringing machine 'default' up with 'libvirt' provider...
00:02:17.735 ==> default: Creating image (snapshot of base box volume).
00:02:17.735 ==> default: Creating domain with the following settings...
00:02:17.735 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734386303_e95511c898c517f40626
00:02:17.735 ==> default: -- Domain type: kvm
00:02:17.735 ==> default: -- Cpus: 10
00:02:17.735 ==> default: -- Feature: acpi
00:02:17.735 ==> default: -- Feature: apic
00:02:17.735 ==> default: -- Feature: pae
00:02:17.735 ==> default: -- Memory: 12288M
00:02:17.735 ==> default: -- Memory Backing: hugepages:
00:02:17.735 ==> default: -- Management MAC:
00:02:17.735 ==> default: -- Loader:
00:02:17.735 ==> default: -- Nvram:
00:02:17.735 ==> default: -- Base box: spdk/fedora39
00:02:18.009 ==> default: -- Storage pool: default
00:02:18.009 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734386303_e95511c898c517f40626.img (20G)
00:02:18.009 ==> default: -- Volume Cache: default
00:02:18.009 ==> default: -- Kernel:
00:02:18.009 ==> default: -- Initrd:
00:02:18.009 ==> default: -- Graphics Type: vnc
00:02:18.009 ==> default: -- Graphics Port: -1
00:02:18.009 ==> default: -- Graphics IP: 127.0.0.1
00:02:18.009 ==> default: -- Graphics Password: Not defined
00:02:18.009 ==> default: -- Video Type: cirrus
00:02:18.009 ==> default: -- Video VRAM: 9216
00:02:18.009 ==> default: -- Sound Type:
00:02:18.009 ==> default: -- Keymap: en-us
00:02:18.009 ==> default: -- TPM Path:
00:02:18.009 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:18.009 ==> default: -- Command line args:
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:18.009 ==> default: -> value=-drive,
00:02:18.009 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:18.009 ==> default: -> value=-drive,
00:02:18.009 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:18.009 ==> default: -> value=-drive,
00:02:18.009 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:18.009 ==> default: -> value=-drive,
00:02:18.009 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:18.009 ==> default: -> value=-drive,
00:02:18.009 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:18.009 ==> default: -> value=-drive,
00:02:18.009 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:18.009 ==> default: -> value=-device,
00:02:18.009 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
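
The command-line args above wire each backing file to its own NVMe controller through a -drive/-device pair, with one nvme-ns namespace per image; the fourth controller additionally hangs off an nvme-subsys device with Flexible Data Placement (FDP) enabled. Distilled to that last controller, the invocation pattern is roughly as follows (all option strings taken verbatim from the args above; the real command also carries the machine, memory, and NIC options vagrant adds):

  # Sketch: one FDP-capable NVMe controller, as constructed in the args above.
  qemu-system-x86_64 \
    -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
    -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0 \
    -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
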
00:02:18.009 ==> default: Creating shared folders metadata...
00:02:18.009 ==> default: Starting domain.
00:02:19.924 ==> default: Waiting for domain to get an IP address...
00:02:38.085 ==> default: Waiting for SSH to become available...
00:02:38.085 ==> default: Configuring and enabling network interfaces...
00:02:42.293 default: SSH address: 192.168.121.252:22
00:02:42.293 default: SSH username: vagrant
00:02:42.293 default: SSH auth method: private key
00:02:43.697 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:50.256 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:54.442 ==> default: Mounting SSHFS shared folder...
00:02:55.009 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:55.009 ==> default: Checking Mount..
00:02:56.384 ==> default: Folder Successfully Mounted!
00:02:56.384
00:02:56.384 SUCCESS!
00:02:56.384
00:02:56.384 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:56.384 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:56.384 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:56.384
00:02:56.392 [Pipeline] }
00:02:56.407 [Pipeline] // stage
00:02:56.416 [Pipeline] dir
00:02:56.416 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:56.418 [Pipeline] {
00:02:56.430 [Pipeline] catchError
00:02:56.432 [Pipeline] {
00:02:56.444 [Pipeline] sh
00:02:56.724 + vagrant ssh-config --host vagrant
00:02:56.724 + sed -ne '/^Host/,$p'
00:02:56.724 + tee ssh_conf
00:02:59.252 Host vagrant
00:02:59.252 HostName 192.168.121.252
00:02:59.252 User vagrant
00:02:59.252 Port 22
00:02:59.252 UserKnownHostsFile /dev/null
00:02:59.252 StrictHostKeyChecking no
00:02:59.252 PasswordAuthentication no
00:02:59.252 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:59.252 IdentitiesOnly yes
00:02:59.252 LogLevel FATAL
00:02:59.252 ForwardAgent yes
00:02:59.252 ForwardX11 yes
00:02:59.252
00:02:59.264 [Pipeline] withEnv
00:02:59.266 [Pipeline] {
00:02:59.278 [Pipeline] sh
00:02:59.555 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:59.555 source /etc/os-release
00:02:59.555 [[ -e /image.version ]] && img=$(< /image.version)
00:02:59.555 # Minimal, systemd-like check.
00:02:59.555 if [[ -e /.dockerenv ]]; then
00:02:59.555 # Clear garbage from the node'\''s name:
00:02:59.555 # agt-er_autotest_547-896 -> autotest_547-896
00:02:59.555 # $HOSTNAME is the actual container id
00:02:59.555 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:59.555 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:59.555 # We can assume this is a mount from a host where container is running,
00:02:59.555 # so fetch its hostname to easily identify the target swarm worker.
00:02:59.555 container="$(< /etc/hostname) ($agent)"
00:02:59.555 else
00:02:59.555 # Fallback
00:02:59.555 container=$agent
00:02:59.555 fi
00:02:59.555 fi
00:02:59.555 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:59.555 '
00:02:59.566 [Pipeline] }
00:02:59.583 [Pipeline] // withEnv
00:02:59.591 [Pipeline] setCustomBuildProperty
00:02:59.606 [Pipeline] stage
00:02:59.608 [Pipeline] { (Tests)
00:02:59.625 [Pipeline] sh
00:02:59.903 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:00.173 [Pipeline] sh
00:03:00.451 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:00.463 [Pipeline] timeout
00:03:00.464 Timeout set to expire in 50 min
00:03:00.465 [Pipeline] {
00:03:00.479 [Pipeline] sh
00:03:00.757 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:03:01.014 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:03:01.026 [Pipeline] sh
00:03:01.303 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:03:01.315 [Pipeline] sh
00:03:01.593 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:01.609 [Pipeline] sh
00:03:01.922 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:03:01.922 ++ readlink -f spdk_repo
00:03:01.922 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:01.922 + [[ -n /home/vagrant/spdk_repo ]]
00:03:01.922 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:01.922 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:01.922 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:01.922 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:01.922 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:01.922 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:01.922 + cd /home/vagrant/spdk_repo
00:03:01.922 + source /etc/os-release
00:03:01.922 ++ NAME='Fedora Linux'
00:03:01.922 ++ VERSION='39 (Cloud Edition)'
00:03:01.922 ++ ID=fedora
00:03:01.922 ++ VERSION_ID=39
00:03:01.922 ++ VERSION_CODENAME=
00:03:01.922 ++ PLATFORM_ID=platform:f39
00:03:01.922 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:01.922 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:01.922 ++ LOGO=fedora-logo-icon
00:03:01.922 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:01.922 ++ HOME_URL=https://fedoraproject.org/
00:03:01.922 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:01.922 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:01.922 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:01.922 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:01.922 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:01.922 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:01.922 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:01.922 ++ SUPPORT_END=2024-11-12
00:03:01.922 ++ VARIANT='Cloud Edition'
00:03:01.922 ++ VARIANT_ID=cloud
00:03:01.922 + uname -a
00:03:01.922 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:01.922 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:02.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:02.440 Hugepages
00:03:02.440 node hugesize free / total
00:03:02.440 node0 1048576kB 0 / 0
00:03:02.440 node0 2048kB 0 / 0
00:03:02.440
00:03:02.440 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:02.440 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:02.440 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:02.440 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:03:02.699 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:03:02.699 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:02.699 + rm -f /tmp/spdk-ld-path
00:03:02.699 + source autorun-spdk.conf
00:03:02.699 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.699 ++ SPDK_TEST_NVME=1
00:03:02.699 ++ SPDK_TEST_FTL=1
00:03:02.699 ++ SPDK_TEST_ISAL=1
00:03:02.699 ++ SPDK_RUN_ASAN=1
00:03:02.699 ++ SPDK_RUN_UBSAN=1
00:03:02.699 ++ SPDK_TEST_XNVME=1
00:03:02.699 ++ SPDK_TEST_NVME_FDP=1
00:03:02.699 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:02.699 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:02.699 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:02.699 ++ RUN_NIGHTLY=1
00:03:02.699 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:02.699 + [[ -n '' ]]
00:03:02.699 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:02.699 + for M in /var/spdk/build-*-manifest.txt
00:03:02.699 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:02.699 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:02.699 + for M in /var/spdk/build-*-manifest.txt
00:03:02.699 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:02.699 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:02.699 + for M in /var/spdk/build-*-manifest.txt
00:03:02.699 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:02.699 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:02.699 ++ uname
00:03:02.699 + [[ Linux == \L\i\n\u\x ]]
00:03:02.699 + sudo dmesg -T
00:03:02.699 + sudo dmesg --clear
00:03:02.699 + dmesg_pid=5756
00:03:02.699 + [[ Fedora Linux == FreeBSD ]]
00:03:02.699 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.699 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:02.699 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:02.699 + sudo dmesg -Tw
00:03:02.699 + [[ -x /usr/src/fio-static/fio ]]
00:03:02.699 + export FIO_BIN=/usr/src/fio-static/fio
00:03:02.699 + FIO_BIN=/usr/src/fio-static/fio
00:03:02.699 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:02.699 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:02.699 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:02.699 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.699 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:02.699 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:02.699 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.699 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:02.699 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:02.699 21:59:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:02.699 21:59:08 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:02.699 21:59:08 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:03:02.699 21:59:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:02.699 21:59:08 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:02.699 21:59:08 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:02.699 21:59:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:02.699 21:59:08 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:02.699 21:59:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:02.699 21:59:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:02.699 21:59:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:02.699 21:59:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.700 21:59:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.700 21:59:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.700 21:59:08 -- paths/export.sh@5 -- $ export PATH
00:03:02.700 21:59:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.700 21:59:08 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:02.700 21:59:08 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:02.700 21:59:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734386349.XXXXXX
00:03:02.700 21:59:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734386349.bjSYch
00:03:02.700 21:59:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:02.700 21:59:09 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']'
00:03:02.700 21:59:09 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:02.700 21:59:09 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:03:02.700 21:59:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:02.700 21:59:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:02.700 21:59:09 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:02.700 21:59:09 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:02.700 21:59:09 -- common/autotest_common.sh@10 -- $ set +x
00:03:02.700 21:59:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:03:02.700 21:59:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:02.700 21:59:09 -- pm/common@17 -- $ local monitor
00:03:02.700 21:59:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.700 21:59:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.700 21:59:09 -- pm/common@25 -- $ sleep 1
00:03:02.700 21:59:09 -- pm/common@21 -- $ date +%s
00:03:02.700 21:59:09 -- pm/common@21 -- $ date +%s
00:03:02.700 21:59:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734386349
00:03:02.700 21:59:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734386349
00:03:02.959 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734386349_collect-cpu-load.pm.log
00:03:02.959 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734386349_collect-vmstat.pm.log
00:03:03.895 21:59:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:03.895 21:59:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:03.895 21:59:10 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:03.895 21:59:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:03.895 21:59:10 -- spdk/autobuild.sh@16 -- $ date -u
00:03:03.895 Mon Dec 16 09:59:10 PM UTC 2024
00:03:03.895 21:59:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:03.895 v25.01-rc1-2-ge01cb43b8
00:03:03.895 21:59:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:03.895 21:59:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:03.895 21:59:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:03.895 21:59:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.895 21:59:10 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.895 ************************************
00:03:03.895 START TEST asan
00:03:03.895 ************************************
00:03:03.895 using asan
00:03:03.895 21:59:10 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:03.895
00:03:03.895 real 0m0.000s
00:03:03.895 user 0m0.000s
00:03:03.895 sys 0m0.000s
00:03:03.895 21:59:10 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:03.895 ************************************
00:03:03.895 END TEST asan
00:03:03.895 21:59:10 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:03.895 ************************************
00:03:03.895 21:59:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:03.895 21:59:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:03.895 21:59:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:03.895 21:59:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.895 21:59:10 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.895 ************************************
00:03:03.895 START TEST ubsan
00:03:03.895 ************************************
00:03:03.895 using ubsan
00:03:03.895 21:59:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:03.895
00:03:03.895 real 0m0.000s
00:03:03.895 user 0m0.000s
00:03:03.895 sys 0m0.000s
00:03:03.895 ************************************
00:03:03.895 END TEST ubsan
00:03:03.895 ************************************
00:03:03.895 21:59:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:03.895 21:59:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:03.895 21:59:10 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:03:03.895 21:59:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:03.895 21:59:10 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:03.895 21:59:10 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:03:03.895 21:59:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.895 21:59:10 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.895 ************************************
00:03:03.895 START TEST build_native_dpdk
00:03:03.895 ************************************
00:03:03.895 21:59:10 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:03.895 eeb0605f11 version: 23.11.0
00:03:03.895 238778122a doc: update release notes for 23.11
00:03:03.895 46aa6b3cfc doc: fix description of RSS features
00:03:03.895 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:03:03.895 7e421ae345 devtools: support skipping forbid rule check
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:03:03.895 21:59:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:03.895 21:59:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:03:03.896 patching file config/rte_config.h
00:03:03.896 Hunk #1 succeeded at 60 (offset 1 line).
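
The xtrace above is scripts/common.sh's cmp_versions running for `lt 23.11.0 21.11.0`: both version strings are split on '.', '-' and ':' into arrays, each field is normalized with decimal and compared numerically, and the less-than check fails (23 > 21), which is what routes the build into the rte_config.h patch for DPDK releases newer than 21.11. Condensed into one function, the logic reconstructed from that trace looks like this (a sketch, not the verbatim scripts/common.sh source; the decimal normalization is simplified to a default-to-zero expansion):

  # Sketch of the comparison the trace shows: returns 0 if ver1 < ver2.
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"    # split on . - : as in the trace
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # higher field => not less-than
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                          # equal versions are not less-than
  }
  version_lt 23.11.0 21.11.0 || echo "23.11.0 >= 21.11.0, apply post-21.11 patches"
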
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:03:03.896 patching file lib/pcapng/rte_pcapng.c
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:03.896 21:59:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:03:03.896 21:59:10 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:08.085 The Meson build system
00:03:08.085 Version: 1.5.0
00:03:08.085 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:08.086 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:08.086 Build type: native build
00:03:08.086 Program cat found: YES (/usr/bin/cat)
00:03:08.086 Project name: DPDK
00:03:08.086 Project version: 23.11.0
00:03:08.086 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:08.086 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:08.086 Host machine cpu family: x86_64
00:03:08.086 Host machine cpu: x86_64
00:03:08.086 Message: ## Building in Developer Mode ##
00:03:08.086 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:08.086 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:08.086 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:08.086 Program python3 found: YES (/usr/bin/python3)
00:03:08.086 Program cat found: YES (/usr/bin/cat)
00:03:08.086 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:08.086 Compiler for C supports arguments -march=native: YES
00:03:08.086 Checking for size of "void *" : 8
00:03:08.086 Checking for size of "void *" : 8 (cached)
00:03:08.086 Library m found: YES
00:03:08.086 Library numa found: YES
00:03:08.086 Has header "numaif.h" : YES
00:03:08.086 Library fdt found: NO
00:03:08.086 Library execinfo found: NO
00:03:08.086 Has header "execinfo.h" : YES
00:03:08.086 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:08.086 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:08.086 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:08.086 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:08.086 Run-time dependency openssl found: YES 3.1.1
00:03:08.086 Run-time dependency libpcap found: YES 1.10.4
00:03:08.086 Has header "pcap.h" with dependency libpcap: YES
00:03:08.086 Compiler for C supports arguments -Wcast-qual: YES
00:03:08.086 Compiler for C supports arguments -Wdeprecated: YES
00:03:08.086 Compiler for C supports arguments -Wformat: YES
00:03:08.086 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:08.086 Compiler for C supports arguments -Wformat-security: NO
00:03:08.086 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:08.086 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:08.086 Compiler for C supports arguments -Wnested-externs: YES
00:03:08.086 Compiler for C supports arguments -Wold-style-definition: YES
00:03:08.086 Compiler for C supports arguments -Wpointer-arith: YES
00:03:08.086 Compiler for C supports arguments -Wsign-compare: YES
00:03:08.086 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:08.086 Compiler for C supports arguments -Wundef: YES
00:03:08.086 Compiler for C supports arguments -Wwrite-strings: YES
00:03:08.086 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:08.086 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:08.086 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:08.086 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:08.086 Program objdump found: YES (/usr/bin/objdump)
00:03:08.086 Compiler for C supports arguments -mavx512f: YES
00:03:08.086 Checking if "AVX512 checking" compiles: YES
00:03:08.086 Fetching value of define "__SSE4_2__" : 1
00:03:08.086 Fetching value of define "__AES__" : 1
00:03:08.086 Fetching value of define "__AVX__" : 1
00:03:08.086 Fetching value of define "__AVX2__" : 1
00:03:08.086 Fetching value of define "__AVX512BW__" : 1
00:03:08.086 Fetching value of define "__AVX512CD__" : 1
00:03:08.086 Fetching value of define "__AVX512DQ__" : 1
00:03:08.086 Fetching value of define "__AVX512F__" : 1
00:03:08.086 Fetching value of define "__AVX512VL__" : 1
00:03:08.086 Fetching value of define "__PCLMUL__" : 1
00:03:08.086 Fetching value of define "__RDRND__" : 1
00:03:08.086 Fetching value of define "__RDSEED__" : 1
00:03:08.086 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:08.086 Fetching value of define "__znver1__" : (undefined)
00:03:08.086 Fetching value of define "__znver2__" : (undefined)
00:03:08.086 Fetching value of define "__znver3__" : (undefined)
00:03:08.086 Fetching value of define "__znver4__" : (undefined)
00:03:08.086 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:08.086 Message: lib/log: Defining dependency "log"
00:03:08.086 Message: lib/kvargs: Defining dependency "kvargs"
00:03:08.086 Message: lib/telemetry: Defining dependency "telemetry"
00:03:08.086 Checking for function "getentropy" : NO 00:03:08.086 Message: lib/eal: Defining dependency "eal" 00:03:08.086 Message: lib/ring: Defining dependency "ring" 00:03:08.086 Message: lib/rcu: Defining dependency "rcu" 00:03:08.086 Message: lib/mempool: Defining dependency "mempool" 00:03:08.086 Message: lib/mbuf: Defining dependency "mbuf" 00:03:08.086 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:08.086 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:08.086 Compiler for C supports arguments -mpclmul: YES 00:03:08.086 Compiler for C supports arguments -maes: YES 00:03:08.086 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:08.086 Compiler for C supports arguments -mavx512bw: YES 00:03:08.086 Compiler for C supports arguments -mavx512dq: YES 00:03:08.086 Compiler for C supports arguments -mavx512vl: YES 00:03:08.086 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:08.086 Compiler for C supports arguments -mavx2: YES 00:03:08.086 Compiler for C supports arguments -mavx: YES 00:03:08.086 Message: lib/net: Defining dependency "net" 00:03:08.086 Message: lib/meter: Defining dependency "meter" 00:03:08.086 Message: lib/ethdev: Defining dependency "ethdev" 00:03:08.086 Message: lib/pci: Defining dependency "pci" 00:03:08.086 Message: lib/cmdline: Defining dependency "cmdline" 00:03:08.086 Message: lib/metrics: Defining dependency "metrics" 00:03:08.086 Message: lib/hash: Defining dependency "hash" 00:03:08.086 Message: lib/timer: Defining dependency "timer" 00:03:08.086 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:08.086 Message: lib/acl: Defining dependency "acl" 00:03:08.086 Message: lib/bbdev: Defining dependency "bbdev" 00:03:08.086 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:08.086 Run-time dependency libelf found: YES 0.191 00:03:08.086 Message: lib/bpf: Defining dependency "bpf" 00:03:08.086 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:08.086 Message: lib/compressdev: Defining dependency "compressdev" 00:03:08.086 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:08.086 Message: lib/distributor: Defining dependency "distributor" 00:03:08.086 Message: lib/dmadev: Defining dependency "dmadev" 00:03:08.086 Message: lib/efd: Defining dependency "efd" 00:03:08.086 Message: lib/eventdev: Defining dependency "eventdev" 00:03:08.086 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:08.086 Message: lib/gpudev: Defining dependency "gpudev" 00:03:08.086 Message: lib/gro: Defining dependency "gro" 00:03:08.086 Message: lib/gso: Defining dependency "gso" 00:03:08.086 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:08.086 Message: lib/jobstats: Defining dependency "jobstats" 00:03:08.086 Message: lib/latencystats: Defining dependency "latencystats" 00:03:08.086 Message: lib/lpm: Defining dependency "lpm" 00:03:08.086 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:08.086 Fetching value of define "__AVX512IFMA__" : 1 00:03:08.086 Message: 
00:03:08.086 Message: lib/pcapng: Defining dependency "pcapng"
00:03:08.086 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:08.086 Message: lib/power: Defining dependency "power"
00:03:08.086 Message: lib/rawdev: Defining dependency "rawdev"
00:03:08.086 Message: lib/regexdev: Defining dependency "regexdev"
00:03:08.086 Message: lib/mldev: Defining dependency "mldev"
00:03:08.086 Message: lib/rib: Defining dependency "rib"
00:03:08.086 Message: lib/reorder: Defining dependency "reorder"
00:03:08.086 Message: lib/sched: Defining dependency "sched"
00:03:08.086 Message: lib/security: Defining dependency "security"
00:03:08.086 Message: lib/stack: Defining dependency "stack"
00:03:08.086 Has header "linux/userfaultfd.h" : YES
00:03:08.086 Has header "linux/vduse.h" : YES
00:03:08.086 Message: lib/vhost: Defining dependency "vhost"
00:03:08.086 Message: lib/ipsec: Defining dependency "ipsec"
00:03:08.086 Message: lib/pdcp: Defining dependency "pdcp"
00:03:08.086 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.086 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.086 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.086 Message: lib/fib: Defining dependency "fib"
00:03:08.086 Message: lib/port: Defining dependency "port"
00:03:08.086 Message: lib/pdump: Defining dependency "pdump"
00:03:08.086 Message: lib/table: Defining dependency "table"
00:03:08.086 Message: lib/pipeline: Defining dependency "pipeline"
00:03:08.086 Message: lib/graph: Defining dependency "graph"
00:03:08.086 Message: lib/node: Defining dependency "node"
00:03:08.086 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:08.086 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:08.086 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:08.086 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:09.462 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:09.462 Compiler for C supports arguments -Wno-unused-value: YES
00:03:09.462 Compiler for C supports arguments -Wno-format: YES
00:03:09.462 Compiler for C supports arguments -Wno-format-security: YES
00:03:09.462 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:09.462 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:09.462 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:09.462 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:09.462 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.462 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:09.462 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:09.462 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:09.462 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:09.462 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:09.462 Has header "sys/epoll.h" : YES
00:03:09.462 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:09.462 Configuring doxy-api-html.conf using configuration
00:03:09.462 Configuring doxy-api-man.conf using configuration
00:03:09.462 Program mandb found: YES (/usr/bin/mandb)
00:03:09.462 Program sphinx-build found: NO
00:03:09.462 Configuring rte_build_config.h using configuration
00:03:09.462 Message:
00:03:09.462 =================
00:03:09.462 Applications Enabled
00:03:09.462 =================
00:03:09.462
00:03:09.462 apps:
00:03:09.462 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:03:09.462 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:03:09.462 test-pmd, test-regex, test-sad, test-security-perf,
00:03:09.462
00:03:09.462 Message:
00:03:09.462 =================
00:03:09.462 Libraries Enabled
00:03:09.462 =================
00:03:09.462
00:03:09.462 libs:
00:03:09.462 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:09.462 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:03:09.462 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:03:09.462 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:03:09.462 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:03:09.462 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:03:09.462 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:03:09.462
00:03:09.462
00:03:09.462 Message:
00:03:09.462 ===============
00:03:09.462 Drivers Enabled
00:03:09.462 ===============
00:03:09.462
00:03:09.462 common:
00:03:09.462
00:03:09.462 bus:
00:03:09.462 pci, vdev,
00:03:09.462 mempool:
00:03:09.462 ring,
00:03:09.462 dma:
00:03:09.462
00:03:09.462 net:
00:03:09.462 i40e,
00:03:09.462 raw:
00:03:09.462
00:03:09.462 crypto:
00:03:09.462
00:03:09.462 compress:
00:03:09.462
00:03:09.462 regex:
00:03:09.462
00:03:09.462 ml:
00:03:09.462
00:03:09.462 vdpa:
00:03:09.462
00:03:09.462 event:
00:03:09.462
00:03:09.462 baseband:
00:03:09.462
00:03:09.462 gpu:
00:03:09.462
00:03:09.462
00:03:09.462 Message:
00:03:09.462 =================
00:03:09.462 Content Skipped
00:03:09.462 =================
00:03:09.462
00:03:09.462 apps:
00:03:09.462
00:03:09.462 libs:
00:03:09.462
00:03:09.462 drivers:
00:03:09.462 common/cpt: not in enabled drivers build config
00:03:09.462 common/dpaax: not in enabled drivers build config
00:03:09.462 common/iavf: not in enabled drivers build config
00:03:09.462 common/idpf: not in enabled drivers build config
00:03:09.462 common/mvep: not in enabled drivers build config
00:03:09.462 common/octeontx: not in enabled drivers build config
00:03:09.462 bus/auxiliary: not in enabled drivers build config
00:03:09.462 bus/cdx: not in enabled drivers build config
00:03:09.462 bus/dpaa: not in enabled drivers build config
00:03:09.462 bus/fslmc: not in enabled drivers build config
00:03:09.462 bus/ifpga: not in enabled drivers build config
00:03:09.462 bus/platform: not in enabled drivers build config
00:03:09.462 bus/vmbus: not in enabled drivers build config
00:03:09.462 common/cnxk: not in enabled drivers build config
00:03:09.462 common/mlx5: not in enabled drivers build config
00:03:09.462 common/nfp: not in enabled drivers build config
00:03:09.462 common/qat: not in enabled drivers build config
00:03:09.462 common/sfc_efx: not in enabled drivers build config
00:03:09.462 mempool/bucket: not in enabled drivers build config
00:03:09.462 mempool/cnxk: not in enabled drivers build config
00:03:09.463 mempool/dpaa: not in enabled drivers build config
00:03:09.463 mempool/dpaa2: not in enabled drivers build config
00:03:09.463 mempool/octeontx: not in enabled drivers build config
00:03:09.463 mempool/stack: not in enabled drivers build config
00:03:09.463 dma/cnxk: not in enabled drivers build config
00:03:09.463 dma/dpaa: not in enabled drivers build config
00:03:09.463 dma/dpaa2: not in enabled drivers build config
00:03:09.463 dma/hisilicon: not in enabled drivers build config
00:03:09.463 dma/idxd: not in enabled drivers build config
00:03:09.463 dma/ioat: not in enabled drivers build config
00:03:09.463 dma/skeleton: not in enabled drivers build config
00:03:09.463 net/af_packet: not in enabled drivers build config
00:03:09.463 net/af_xdp: not in enabled drivers build config
00:03:09.463 net/ark: not in enabled drivers build config
00:03:09.463 net/atlantic: not in enabled drivers build config
00:03:09.463 net/avp: not in enabled drivers build config
00:03:09.463 net/axgbe: not in enabled drivers build config
00:03:09.463 net/bnx2x: not in enabled drivers build config
00:03:09.463 net/bnxt: not in enabled drivers build config
00:03:09.463 net/bonding: not in enabled drivers build config
00:03:09.463 net/cnxk: not in enabled drivers build config
00:03:09.463 net/cpfl: not in enabled drivers build config
00:03:09.463 net/cxgbe: not in enabled drivers build config
00:03:09.463 net/dpaa: not in enabled drivers build config
00:03:09.463 net/dpaa2: not in enabled drivers build config
00:03:09.463 net/e1000: not in enabled drivers build config
00:03:09.463 net/ena: not in enabled drivers build config
00:03:09.463 net/enetc: not in enabled drivers build config
00:03:09.463 net/enetfec: not in enabled drivers build config
00:03:09.463 net/enic: not in enabled drivers build config
00:03:09.463 net/failsafe: not in enabled drivers build config
00:03:09.463 net/fm10k: not in enabled drivers build config
00:03:09.463 net/gve: not in enabled drivers build config
00:03:09.463 net/hinic: not in enabled drivers build config
00:03:09.463 net/hns3: not in enabled drivers build config
00:03:09.463 net/iavf: not in enabled drivers build config
00:03:09.463 net/ice: not in enabled drivers build config
00:03:09.463 net/idpf: not in enabled drivers build config
00:03:09.463 net/igc: not in enabled drivers build config
00:03:09.463 net/ionic: not in enabled drivers build config
00:03:09.463 net/ipn3ke: not in enabled drivers build config
00:03:09.463 net/ixgbe: not in enabled drivers build config
00:03:09.463 net/mana: not in enabled drivers build config
00:03:09.463 net/memif: not in enabled drivers build config
00:03:09.463 net/mlx4: not in enabled drivers build config
00:03:09.463 net/mlx5: not in enabled drivers build config
00:03:09.463 net/mvneta: not in enabled drivers build config
00:03:09.463 net/mvpp2: not in enabled drivers build config
00:03:09.463 net/netvsc: not in enabled drivers build config
00:03:09.463 net/nfb: not in enabled drivers build config
00:03:09.463 net/nfp: not in enabled drivers build config
00:03:09.463 net/ngbe: not in enabled drivers build config
00:03:09.463 net/null: not in enabled drivers build config
00:03:09.463 net/octeontx: not in enabled drivers build config
00:03:09.463 net/octeon_ep: not in enabled drivers build config
00:03:09.463 net/pcap: not in enabled drivers build config
00:03:09.463 net/pfe: not in enabled drivers build config
00:03:09.463 net/qede: not in enabled drivers build config
00:03:09.463 net/ring: not in enabled drivers build config
00:03:09.463 net/sfc: not in enabled drivers build config
00:03:09.463 net/softnic: not in enabled drivers build config
00:03:09.463 net/tap: not in enabled drivers build config
00:03:09.463 net/thunderx: not in enabled drivers build config
00:03:09.463 net/txgbe: not in enabled drivers build config
00:03:09.463 net/vdev_netvsc: not in enabled drivers build config
00:03:09.463 net/vhost: not in enabled drivers build config
00:03:09.463 net/virtio: not in enabled drivers build config
00:03:09.463 net/vmxnet3: not in enabled drivers build config
00:03:09.463 raw/cnxk_bphy: not in enabled drivers build config
00:03:09.463 raw/cnxk_gpio: not in enabled drivers build config
00:03:09.463 raw/dpaa2_cmdif: not in enabled drivers build config
00:03:09.463 raw/ifpga: not in enabled drivers build config
00:03:09.463 raw/ntb: not in enabled drivers build config
00:03:09.463 raw/skeleton: not in enabled drivers build config
00:03:09.463 crypto/armv8: not in enabled drivers build config
00:03:09.463 crypto/bcmfs: not in enabled drivers build config
00:03:09.463 crypto/caam_jr: not in enabled drivers build config
00:03:09.463 crypto/ccp: not in enabled drivers build config
00:03:09.463 crypto/cnxk: not in enabled drivers build config
00:03:09.463 crypto/dpaa_sec: not in enabled drivers build config
00:03:09.463 crypto/dpaa2_sec: not in enabled drivers build config
00:03:09.463 crypto/ipsec_mb: not in enabled drivers build config
00:03:09.463 crypto/mlx5: not in enabled drivers build config
00:03:09.463 crypto/mvsam: not in enabled drivers build config
00:03:09.463 crypto/nitrox: not in enabled drivers build config
00:03:09.463 crypto/null: not in enabled drivers build config
00:03:09.463 crypto/octeontx: not in enabled drivers build config
00:03:09.463 crypto/openssl: not in enabled drivers build config
00:03:09.463 crypto/scheduler: not in enabled drivers build config
00:03:09.463 crypto/uadk: not in enabled drivers build config
00:03:09.463 crypto/virtio: not in enabled drivers build config
00:03:09.463 compress/isal: not in enabled drivers build config
00:03:09.463 compress/mlx5: not in enabled drivers build config
00:03:09.463 compress/octeontx: not in enabled drivers build config
00:03:09.463 compress/zlib: not in enabled drivers build config
00:03:09.463 regex/mlx5: not in enabled drivers build config
00:03:09.463 regex/cn9k: not in enabled drivers build config
00:03:09.463 ml/cnxk: not in enabled drivers build config
00:03:09.463 vdpa/ifc: not in enabled drivers build config
00:03:09.463 vdpa/mlx5: not in enabled drivers build config
00:03:09.463 vdpa/nfp: not in enabled drivers build config
00:03:09.463 vdpa/sfc: not in enabled drivers build config
00:03:09.463 event/cnxk: not in enabled drivers build config
00:03:09.463 event/dlb2: not in enabled drivers build config
00:03:09.463 event/dpaa: not in enabled drivers build config
00:03:09.463 event/dpaa2: not in enabled drivers build config
00:03:09.463 event/dsw: not in enabled drivers build config
00:03:09.463 event/opdl: not in enabled drivers build config
00:03:09.463 event/skeleton: not in enabled drivers build config
00:03:09.463 event/sw: not in enabled drivers build config
00:03:09.463 event/octeontx: not in enabled drivers build config
00:03:09.463 baseband/acc: not in enabled drivers build config
00:03:09.463 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:09.463 baseband/fpga_lte_fec: not in enabled drivers build config
00:03:09.463 baseband/la12xx: not in enabled drivers build config
00:03:09.463 baseband/null: not in enabled drivers build config
00:03:09.463 baseband/turbo_sw: not in enabled drivers build config
00:03:09.463 gpu/cuda: not in enabled drivers build config
00:03:09.463
00:03:09.463
00:03:09.463 Build targets in project: 215
00:03:09.463
00:03:09.463 DPDK 23.11.0
00:03:09.463
00:03:09.463 User defined options
00:03:09.463 libdir : lib
00:03:09.463 prefix : /home/vagrant/spdk_repo/dpdk/build
00:03:09.463 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:09.463 c_link_args :
00:03:09.463 enable_docs : false
00:03:09.463 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:09.463 enable_kmods : false
00:03:09.463 machine : native
00:03:09.463 tests : false
00:03:09.463
00:03:09.463 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:09.463 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:09.721 21:59:15 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:09.721 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:09.721 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:09.721 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:09.721 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:09.721 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:09.721 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:09.721 [6/705] Linking static target lib/librte_kvargs.a
00:03:09.721 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:09.721 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:09.721 [9/705] Linking static target lib/librte_log.a
00:03:09.979 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:09.979 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.979 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:09.979 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:09.979 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:10.236 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:10.236 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:10.236 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.236 [18/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:10.236 [19/705] Linking target lib/librte_log.so.24.0
00:03:10.236 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:10.237 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:10.237 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:10.494 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:10.494 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:10.494 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:10.494 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:03:10.494 [27/705] Linking target lib/librte_kvargs.so.24.0
00:03:10.494 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:10.494 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:10.494 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:10.494 [31/705] Linking static target lib/librte_telemetry.a
00:03:10.751 [32/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:03:10.751 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
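Entries [1/705] through [33/705] above are the log, kvargs, telemetry and EAL objects — the core every DPDK application links against and initializes first. A minimal, hypothetical consumer of the libraries this job is producing (not part of the autotest; assumes DPDK 23.11 headers and libraries installed under the prefix configured above, e.g. resolved via pkg-config):

    #include <stdio.h>

    #include <rte_eal.h>
    #include <rte_lcore.h>

    int
    main(int argc, char **argv)
    {
        /* rte_eal_init() consumes the EAL command-line arguments
         * (cores, memory, device options) and brings up the lcores
         * and the buses compiled in this build. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "rte_eal_init failed\n");
            return 1;
        }
        printf("EAL ready: %u lcore(s), main lcore %u\n",
               rte_lcore_count(), rte_get_main_lcore());
        rte_eal_cleanup();
        return 0;
    }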
00:03:10.751 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:10.751 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:10.751 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:10.751 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:10.751 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:10.751 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:10.751 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:10.751 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:11.009 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.009 [43/705] Linking target lib/librte_telemetry.so.24.0
00:03:11.009 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:11.009 [45/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:03:11.009 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:11.267 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:11.267 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:11.267 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:11.267 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:11.267 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:11.267 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:11.267 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:11.267 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:11.267 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:11.524 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:11.524 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:11.524 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:11.524 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:11.524 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:11.524 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:11.524 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:11.524 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:11.524 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:11.524 [65/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:11.524 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:11.782 [67/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:11.782 [68/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:11.782 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:11.782 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:11.782 [71/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:11.782 [72/705] Compiling C object
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:11.782 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:11.782 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:11.782 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:11.782 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:12.039 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:12.039 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:12.039 [79/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:12.039 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:12.039 [81/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:12.039 [82/705] Linking static target lib/librte_ring.a 00:03:12.296 [83/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:12.296 [84/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:12.296 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:12.296 [86/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:12.296 [87/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.297 [88/705] Linking static target lib/librte_eal.a 00:03:12.297 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:12.297 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:12.297 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:12.554 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:12.554 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:12.554 [94/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:12.554 [95/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:12.554 [96/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:12.554 [97/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:12.812 [98/705] Linking static target lib/librte_rcu.a 00:03:12.812 [99/705] Linking static target lib/librte_mempool.a 00:03:12.812 [100/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:12.812 [101/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:12.812 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:12.812 [103/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:12.813 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.813 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:12.813 [106/705] Linking static target lib/librte_meter.a 00:03:13.070 [107/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:13.070 [108/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:13.071 [109/705] Linking static target lib/librte_mbuf.a 00:03:13.071 [110/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:13.071 [111/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.071 [112/705] Linking static target lib/librte_net.a 00:03:13.071 [113/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.071 [114/705] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:13.071 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:13.071 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:13.328 [117/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.328 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.328 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:13.588 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:13.588 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:13.588 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:13.864 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:13.864 [124/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:13.864 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:13.864 [126/705] Linking static target lib/librte_pci.a 00:03:13.864 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:13.864 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:13.864 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:13.864 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:13.864 [131/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:13.864 [132/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.864 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.864 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:13.864 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:13.864 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:14.122 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:14.122 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:14.122 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:14.122 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:14.122 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:14.122 [142/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:14.122 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:14.122 [144/705] Linking static target lib/librte_cmdline.a 00:03:14.380 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:14.380 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:14.380 [147/705] Linking static target lib/librte_metrics.a 00:03:14.380 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:14.380 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:14.638 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.638 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:14.638 [152/705] Linking static target lib/librte_timer.a 00:03:14.638 [153/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:14.638 [154/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:14.638 [155/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.895 [156/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:14.895 [157/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:14.895 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:15.153 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:15.153 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:15.153 [161/705] Linking static target lib/librte_bitratestats.a 00:03:15.410 [162/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:15.410 [163/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.410 [164/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:15.410 [165/705] Linking static target lib/librte_bbdev.a 00:03:15.410 [166/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:15.668 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:15.668 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:15.668 [169/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.668 [170/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:15.926 [171/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:15.926 [172/705] Linking static target lib/acl/libavx2_tmp.a 00:03:15.926 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:15.926 [174/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:15.926 [175/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:15.926 [176/705] Linking static target lib/librte_ethdev.a 00:03:16.183 [177/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:16.183 [178/705] Linking static target lib/librte_cfgfile.a 00:03:16.183 [179/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.183 [180/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:16.183 [181/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:16.183 [182/705] Linking static target lib/librte_hash.a 00:03:16.183 [183/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:16.183 [184/705] Linking target lib/librte_eal.so.24.0 00:03:16.183 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:16.184 [186/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:16.184 [187/705] Linking target lib/librte_ring.so.24.0 00:03:16.443 [188/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.443 [189/705] Linking target lib/librte_meter.so.24.0 00:03:16.443 [190/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:16.443 [191/705] Linking target lib/librte_pci.so.24.0 00:03:16.443 [192/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:16.443 [193/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:16.443 [194/705] Linking target lib/librte_rcu.so.24.0 00:03:16.443 [195/705] Linking target lib/librte_timer.so.24.0 00:03:16.443 [196/705] Linking target lib/librte_mempool.so.24.0 00:03:16.443 [197/705] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:16.443 [198/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:16.443 [199/705] Linking target lib/librte_cfgfile.so.24.0 00:03:16.443 [200/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:16.443 [201/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:16.443 [202/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.443 [203/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:16.443 [204/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:16.704 [205/705] Linking target lib/librte_mbuf.so.24.0 00:03:16.704 [206/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:16.704 [207/705] Linking static target lib/librte_compressdev.a 00:03:16.704 [208/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:16.704 [209/705] Linking static target lib/librte_acl.a 00:03:16.704 [210/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:16.704 [211/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:16.704 [212/705] Linking target lib/librte_net.so.24.0 00:03:16.704 [213/705] Linking target lib/librte_bbdev.so.24.0 00:03:16.704 [214/705] Linking static target lib/librte_bpf.a 00:03:16.704 [215/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:16.704 [216/705] Linking target lib/librte_cmdline.so.24.0 00:03:16.965 [217/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:16.965 [218/705] Linking target lib/librte_hash.so.24.0 00:03:16.965 [219/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.965 [220/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:16.965 [221/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:16.965 [222/705] Linking target lib/librte_acl.so.24.0 00:03:16.965 [223/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:16.965 [224/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.965 [225/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.965 [226/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:16.965 [227/705] Linking target lib/librte_compressdev.so.24.0 00:03:16.965 [228/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:16.965 [229/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:17.223 [230/705] Linking static target lib/librte_distributor.a 00:03:17.223 [231/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:17.223 [232/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:17.223 [233/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.223 [234/705] Linking target lib/librte_distributor.so.24.0 00:03:17.481 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.481 [236/705] Linking static target lib/librte_dmadev.a 00:03:17.481 [237/705] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:17.481 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.739 [239/705] Linking target lib/librte_dmadev.so.24.0 00:03:17.739 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:17.739 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:17.739 [242/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:17.739 [243/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:17.739 [244/705] Linking static target lib/librte_efd.a 00:03:17.739 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:17.999 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.999 [247/705] Linking target lib/librte_efd.so.24.0 00:03:17.999 [248/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:18.260 [249/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:18.260 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:18.260 [251/705] Linking static target lib/librte_dispatcher.a 00:03:18.260 [252/705] Linking static target lib/librte_gpudev.a 00:03:18.260 [253/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:18.260 [254/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:18.260 [255/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:18.260 [256/705] Linking static target lib/librte_cryptodev.a 00:03:18.260 [257/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:18.518 [258/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.518 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:18.518 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:18.776 [261/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.776 [262/705] Linking target lib/librte_gpudev.so.24.0 00:03:18.776 [263/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:18.776 [264/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:18.776 [265/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:18.776 [266/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:18.776 [267/705] Linking static target lib/librte_gro.a 00:03:18.776 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:18.776 [269/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:19.034 [270/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.034 [271/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:19.034 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:19.034 [273/705] Linking static target lib/librte_eventdev.a 00:03:19.034 [274/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:19.034 [275/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:19.034 [276/705] Linking static target lib/librte_gso.a 00:03:19.034 [277/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.034 [278/705] Linking target 
lib/librte_cryptodev.so.24.0 00:03:19.034 [279/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.294 [280/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:19.294 [281/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:19.294 [282/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:19.294 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:19.294 [284/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.294 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:19.294 [286/705] Linking target lib/librte_ethdev.so.24.0 00:03:19.294 [287/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:19.294 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:19.294 [289/705] Linking static target lib/librte_ip_frag.a 00:03:19.294 [290/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:19.294 [291/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:19.294 [292/705] Linking target lib/librte_metrics.so.24.0 00:03:19.554 [293/705] Linking target lib/librte_bpf.so.24.0 00:03:19.554 [294/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:19.554 [295/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:19.554 [296/705] Linking target lib/librte_bitratestats.so.24.0 00:03:19.554 [297/705] Linking target lib/librte_gro.so.24.0 00:03:19.554 [298/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.554 [299/705] Linking static target lib/librte_jobstats.a 00:03:19.554 [300/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:19.554 [301/705] Linking target lib/librte_gso.so.24.0 00:03:19.554 [302/705] Linking target lib/librte_ip_frag.so.24.0 00:03:19.554 [303/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:19.554 [304/705] Linking static target lib/librte_latencystats.a 00:03:19.554 [305/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:19.554 [306/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:19.814 [307/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.814 [308/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:19.814 [309/705] Linking target lib/librte_jobstats.so.24.0 00:03:19.814 [310/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.814 [311/705] Linking target lib/librte_latencystats.so.24.0 00:03:19.814 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:19.814 [313/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:19.814 [314/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:19.814 [315/705] Linking static target lib/librte_lpm.a 00:03:20.073 [316/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:20.073 [317/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:20.073 [318/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:20.073 [319/705] 
Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.073 [320/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:20.073 [321/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:20.073 [322/705] Linking static target lib/librte_pcapng.a 00:03:20.073 [323/705] Linking target lib/librte_lpm.so.24.0 00:03:20.332 [324/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:20.332 [325/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:20.332 [326/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:20.332 [327/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.332 [328/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:20.332 [329/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.332 [330/705] Linking target lib/librte_eventdev.so.24.0 00:03:20.332 [331/705] Linking target lib/librte_pcapng.so.24.0 00:03:20.332 [332/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:20.332 [333/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:20.332 [334/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:20.332 [335/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:20.332 [336/705] Linking target lib/librte_dispatcher.so.24.0 00:03:20.332 [337/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:20.591 [338/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:20.591 [339/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:20.591 [340/705] Linking static target lib/librte_power.a 00:03:20.591 [341/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:20.591 [342/705] Linking static target lib/librte_regexdev.a 00:03:20.591 [343/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:20.591 [344/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:20.591 [345/705] Linking static target lib/librte_member.a 00:03:20.591 [346/705] Linking static target lib/librte_rawdev.a 00:03:20.591 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:20.591 [348/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:20.849 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:20.849 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:20.849 [351/705] Linking static target lib/librte_mldev.a 00:03:20.849 [352/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:20.849 [353/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.849 [354/705] Linking target lib/librte_member.so.24.0 00:03:20.849 [355/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:20.849 [356/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.849 [357/705] Linking target lib/librte_rawdev.so.24.0 00:03:21.108 [358/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:21.108 [359/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.108 [360/705] Linking target 
lib/librte_power.so.24.0 00:03:21.108 [361/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.108 [362/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:21.108 [363/705] Linking static target lib/librte_reorder.a 00:03:21.108 [364/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:21.108 [365/705] Linking target lib/librte_regexdev.so.24.0 00:03:21.108 [366/705] Linking static target lib/librte_rib.a 00:03:21.108 [367/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:21.108 [368/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:21.108 [369/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:21.108 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:21.367 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:21.367 [372/705] Linking static target lib/librte_stack.a 00:03:21.367 [373/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:21.367 [374/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.367 [375/705] Linking static target lib/librte_security.a 00:03:21.367 [376/705] Linking target lib/librte_reorder.so.24.0 00:03:21.367 [377/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:21.367 [378/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.367 [379/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.628 [380/705] Linking target lib/librte_stack.so.24.0 00:03:21.628 [381/705] Linking target lib/librte_rib.so.24.0 00:03:21.628 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:21.628 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:21.628 [384/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:21.628 [385/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.628 [386/705] Linking target lib/librte_security.so.24.0 00:03:21.628 [387/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.628 [388/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:21.628 [389/705] Linking target lib/librte_mldev.so.24.0 00:03:21.888 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:21.888 [391/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:21.889 [392/705] Linking static target lib/librte_sched.a 00:03:21.889 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:21.889 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:22.149 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.149 [396/705] Linking target lib/librte_sched.so.24.0 00:03:22.149 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:22.149 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:22.149 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:22.409 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:22.409 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:22.409 [402/705] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:22.409 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:22.409 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:22.669 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:22.669 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:22.669 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:22.669 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:22.669 [409/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:22.928 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:22.928 [411/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:22.928 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:22.928 [413/705] Linking static target lib/librte_ipsec.a 00:03:22.928 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.928 [415/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:22.928 [416/705] Linking target lib/librte_ipsec.so.24.0 00:03:23.188 [417/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:23.188 [418/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:23.188 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:23.188 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:23.447 [421/705] Linking static target lib/librte_fib.a 00:03:23.447 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:23.447 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:23.447 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:23.447 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:23.447 [426/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.447 [427/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:23.447 [428/705] Linking target lib/librte_fib.so.24.0 00:03:23.708 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:23.708 [430/705] Linking static target lib/librte_pdcp.a 00:03:23.708 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.708 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:23.969 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:23.969 [434/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:23.969 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:23.969 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:23.969 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:23.969 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:24.228 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:24.228 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:24.485 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:24.485 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:24.485 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:24.485 [444/705] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:24.486 [445/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:24.486 [446/705] Linking static target lib/librte_port.a 00:03:24.486 [447/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:24.486 [448/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:24.486 [449/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:24.486 [450/705] Linking static target lib/librte_pdump.a 00:03:24.743 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:24.743 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.743 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:24.743 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.743 [455/705] Linking target lib/librte_port.so.24.0 00:03:25.001 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:25.001 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:25.001 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:25.001 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:25.001 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:25.001 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:25.259 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:25.259 [463/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:25.259 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:25.259 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:25.259 [466/705] Linking static target lib/librte_table.a 00:03:25.516 [467/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:25.517 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:25.517 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:25.775 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.775 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:25.775 [472/705] Linking target lib/librte_table.so.24.0 00:03:25.775 [473/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:25.775 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:25.775 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:26.033 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:26.033 [477/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:26.033 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:26.033 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:26.033 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:26.291 [481/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:26.291 [482/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:26.550 [483/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:26.550 [484/705] Linking static target lib/librte_graph.a 00:03:26.550 [485/705] Compiling C 
object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:26.550 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:26.550 [487/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:26.550 [488/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:26.808 [489/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.808 [490/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:26.808 [491/705] Linking target lib/librte_graph.so.24.0 00:03:26.808 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:27.066 [493/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:27.066 [494/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:27.066 [495/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:27.066 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:27.066 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:27.066 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:27.066 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:27.066 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:27.324 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:27.324 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:27.324 [503/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:27.582 [504/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:27.582 [505/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:27.582 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:27.582 [507/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:27.582 [508/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:27.582 [509/705] Linking static target lib/librte_node.a 00:03:27.582 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:27.582 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.840 [512/705] Linking target lib/librte_node.so.24.0 00:03:27.840 [513/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:27.840 [514/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:27.840 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:27.840 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:27.840 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:27.840 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.840 [519/705] Linking static target drivers/librte_bus_pci.a 00:03:27.840 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:27.840 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.840 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:27.840 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:28.098 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:28.098 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 
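By this point the PCI and vdev bus drivers have been compiled and the ring mempool driver follows; together with librte_mbuf they back device probing and packet-buffer allocation. A hypothetical fragment creating the default mbuf pool, which is served by the "ring" mempool ops enabled in this configuration; it assumes rte_eal_init() has already succeeded, and the sizes are illustrative only:

    #include <stdio.h>

    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Create a pool of packet buffers on the caller's NUMA socket.
     * rte_pktmbuf_pool_create() uses the default mempool ops, i.e.
     * the mempool/ring driver built in this job. */
    static struct rte_mempool *
    make_pktmbuf_pool(void)
    {
        struct rte_mempool *mp = rte_pktmbuf_pool_create(
            "mbuf_pool",               /* pool name, must be unique */
            8191,                      /* number of mbufs */
            256,                       /* per-lcore cache size */
            0,                         /* app private area per mbuf */
            RTE_MBUF_DEFAULT_BUF_SIZE, /* data room per mbuf */
            (int)rte_socket_id());     /* NUMA socket */
        if (mp == NULL)
            fprintf(stderr, "mbuf pool creation failed\n");
        return mp;
    }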
00:03:28.098 [526/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:28.098 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:28.098 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.098 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:28.098 [530/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.098 [531/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:28.098 [532/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:28.357 [533/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:28.357 [534/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:28.357 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:28.357 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:28.357 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:28.357 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.357 [539/705] Linking static target drivers/librte_mempool_ring.a 00:03:28.357 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.357 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:28.357 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:28.616 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:28.875 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:28.875 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:29.134 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:29.391 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:29.391 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:29.391 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:29.391 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:29.650 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:29.650 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:29.650 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:29.908 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:29.908 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:29.908 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:30.169 [557/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:30.169 [558/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:30.429 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:30.429 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:30.429 [561/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:30.719 [562/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:30.719 [563/705] 
Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:30.719 [564/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:30.719 [565/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:30.719 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:30.719 [567/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:30.719 [568/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:30.980 [569/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:30.980 [570/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:30.980 [571/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:30.980 [572/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:30.980 [573/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:31.240 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:31.240 [575/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:31.240 [576/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:31.240 [577/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:31.240 [578/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:31.500 [579/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:31.500 [580/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:31.500 [581/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:31.500 [582/705] Linking static target drivers/librte_net_i40e.a 00:03:31.500 [583/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:31.500 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:31.764 [585/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:31.764 [586/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:31.764 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:31.764 [588/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:31.764 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.023 [590/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:32.023 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:32.023 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:32.023 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:32.280 [594/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:32.280 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:32.280 [596/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:32.280 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:32.541 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:32.541 [599/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:32.541 [600/705] Linking static target lib/librte_vhost.a 00:03:32.541 [601/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:32.541 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:32.801 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:32.801 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:32.801 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:32.801 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:32.801 [607/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:32.801 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:33.063 [609/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:33.063 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:33.063 [611/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:33.063 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:33.322 [613/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:33.322 [614/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.322 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:33.322 [616/705] Linking target lib/librte_vhost.so.24.0 00:03:33.322 [617/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:33.322 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:33.887 [619/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:33.887 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:33.887 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:33.887 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:33.887 [623/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:34.146 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:34.146 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:34.146 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:34.146 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:34.146 [628/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:34.146 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:34.146 [630/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:34.404 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:34.404 [632/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:34.404 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:34.404 [634/705] Linking static target lib/librte_pipeline.a 00:03:34.404 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:34.404 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:34.662 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:34.662 
[638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:34.662 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:34.662 [640/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:34.662 [641/705] Linking target app/dpdk-dumpcap 00:03:34.662 [642/705] Linking target app/dpdk-graph 00:03:34.921 [643/705] Linking target app/dpdk-proc-info 00:03:34.921 [644/705] Linking target app/dpdk-test-acl 00:03:34.921 [645/705] Linking target app/dpdk-pdump 00:03:34.921 [646/705] Linking target app/dpdk-test-compress-perf 00:03:34.921 [647/705] Linking target app/dpdk-test-crypto-perf 00:03:35.178 [648/705] Linking target app/dpdk-test-cmdline 00:03:35.179 [649/705] Linking target app/dpdk-test-dma-perf 00:03:35.179 [650/705] Linking target app/dpdk-test-fib 00:03:35.179 [651/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:35.179 [652/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:35.436 [653/705] Linking target app/dpdk-test-gpudev 00:03:35.436 [654/705] Linking target app/dpdk-test-flow-perf 00:03:35.436 [655/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:35.436 [656/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:35.694 [657/705] Linking target app/dpdk-test-eventdev 00:03:35.694 [658/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:35.694 [659/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:35.694 [660/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:35.694 [661/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:35.694 [662/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:35.694 [663/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:35.952 [664/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:35.952 [665/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:35.952 [666/705] Linking target app/dpdk-test-mldev 00:03:35.952 [667/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:35.952 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:35.952 [669/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:36.210 [670/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:36.210 [671/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:36.210 [672/705] Linking target app/dpdk-test-bbdev 00:03:36.210 [673/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.210 [674/705] Linking target lib/librte_pipeline.so.24.0 00:03:36.210 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:36.468 [676/705] Linking target app/dpdk-test-pipeline 00:03:36.468 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:36.468 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:36.725 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:36.725 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:36.725 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:36.725 [682/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:36.983 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:36.983 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:36.983 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:36.983 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:37.240 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:37.240 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:37.240 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:37.240 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:37.497 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:37.497 [692/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:37.762 [693/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:37.762 [694/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:37.762 [695/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:37.762 [696/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:37.762 [697/705] Linking target app/dpdk-test-sad 00:03:38.021 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:38.021 [699/705] Linking target app/dpdk-test-regex 00:03:38.021 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:38.278 [701/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:38.278 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:38.536 [703/705] Linking target app/dpdk-test-security-perf 00:03:38.536 [704/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:38.794 [705/705] Linking target app/dpdk-testpmd 00:03:38.794 21:59:45 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:38.794 21:59:45 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:38.794 21:59:45 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:39.057 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:39.057 [0/1] Installing files. 
00:03:39.321 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.321 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.322 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:39.323 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:39.323 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.324 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:39.325 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:39.325 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:39.325 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.325 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:39.326 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:39.326 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.326 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.602 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.602 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.602 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.602 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.602 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.602 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.602 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.602 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.602 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.602 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.602 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.602 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.603 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:39.604 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:39.604 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:39.604 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:39.604 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:39.604 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:39.604 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:39.604 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:39.604 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:39.604 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:39.604 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:39.604 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:39.604 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:39.604 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:39.604 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:39.604 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:39.604 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:39.604 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:39.604 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:39.604 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:39.604 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:39.604 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:39.604 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:39.604 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:39.604 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:39.604 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:39.604 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:39.604 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:39.604 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:39.604 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:39.604 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:39.604 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:39.604 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:39.604 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:39.604 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:39.604 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:39.604 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:39.604 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:39.604 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:39.604 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:39.604 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:39.604 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:39.604 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:39.604 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:39.604 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:39.604 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:39.604 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:39.604 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:39.604 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:39.604 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:39.604 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:39.604 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:39.604 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:39.604 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:39.604 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:39.604 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:39.604 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:39.604 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:39.604 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:39.604 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:39.604 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:39.604 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:39.604 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:39.604 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:39.604 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:39.604 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:39.604 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:39.604 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:39.604 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:39.604 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:39.604 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:39.604 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:39.604 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:39.604 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:39.604 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:39.604 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:39.604 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:39.604 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:39.604 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:39.604 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:39.604 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:39.604 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:39.604 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:39.604 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:39.604 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:39.604 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:39.604 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:39.604 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:39.604 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:39.604 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:39.604 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:39.604 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:39.604 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:39.604 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:39.604 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:39.604 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:39.604 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:39.604 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:39.604 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:39.604 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:39.604 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:39.604 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:39.604 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:39.604 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:39.604 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:39.604 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:39.604 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:39.604 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:39.604 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:39.604 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:39.604 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:39.604 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:39.604 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:39.604 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:39.604 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:39.604 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:39.604 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:39.604 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:39.604 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:39.604 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:39.604 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:39.604 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:39.604 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:39.604 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:39.604 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:39.604 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:39.604 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:39.604 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:39.604 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:39.604 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:39.604 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:39.604 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:39.605 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:39.605 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:39.605 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:39.605 21:59:45 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:39.605 21:59:45 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:39.605 00:03:39.605 real 0m35.741s 00:03:39.605 user 4m6.908s 00:03:39.605 sys 0m35.409s 00:03:39.605 21:59:45 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:39.605 21:59:45 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:39.605 ************************************ 00:03:39.605 END TEST build_native_dpdk 00:03:39.605 ************************************ 00:03:39.605 21:59:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:39.605 21:59:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:39.605 21:59:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:39.605 21:59:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:39.605 21:59:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:39.605 21:59:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:39.605 21:59:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:39.605 21:59:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:39.861 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:39.861 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.862 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:39.862 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:40.119 Using 'verbs' RDMA provider 00:03:51.023 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:00.992 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:00.992 Creating mk/config.mk...done. 00:04:00.992 Creating mk/cc.flags.mk...done. 00:04:00.992 Type 'make' to build. 
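(Note: the libdpdk.pc and libdpdk-libs.pc files installed to dpdk/build/lib/pkgconfig above are what let SPDK's configure resolve this private DPDK build, per the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" line. A minimal sketch of that lookup, assuming the paths from this run; the exact flag output depends on the DPDK build options and is not shown in this log:
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --cflags libdpdk   # should point at /home/vagrant/spdk_repo/dpdk/build/include
  pkg-config --libs libdpdk     # should emit -L/home/vagrant/spdk_repo/dpdk/build/lib plus the librte_* libraries installed above
)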
00:04:00.992 22:00:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:00.992 22:00:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:00.992 22:00:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:00.992 22:00:06 -- common/autotest_common.sh@10 -- $ set +x 00:04:00.992 ************************************ 00:04:00.992 START TEST make 00:04:00.992 ************************************ 00:04:00.992 22:00:06 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:00.992 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:00.992 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:00.992 meson setup builddir \ 00:04:00.992 -Dwith-libaio=enabled \ 00:04:00.992 -Dwith-liburing=enabled \ 00:04:00.992 -Dwith-libvfn=disabled \ 00:04:00.992 -Dwith-spdk=disabled \ 00:04:00.992 -Dexamples=false \ 00:04:00.992 -Dtests=false \ 00:04:00.992 -Dtools=false && \ 00:04:00.992 meson compile -C builddir && \ 00:04:00.992 cd -) 00:04:02.894 The Meson build system 00:04:02.894 Version: 1.5.0 00:04:02.894 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:02.894 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:02.894 Build type: native build 00:04:02.894 Project name: xnvme 00:04:02.894 Project version: 0.7.5 00:04:02.894 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:02.894 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:02.894 Host machine cpu family: x86_64 00:04:02.894 Host machine cpu: x86_64 00:04:02.894 Message: host_machine.system: linux 00:04:02.894 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:02.894 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:02.894 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:02.894 Run-time dependency threads found: YES 00:04:02.894 Has header "setupapi.h" : NO 00:04:02.894 Has header "linux/blkzoned.h" : YES 00:04:02.894 Has header "linux/blkzoned.h" : YES (cached) 00:04:02.894 Has header "libaio.h" : YES 00:04:02.894 Library aio found: YES 00:04:02.894 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:02.894 Run-time dependency liburing found: YES 2.2 00:04:02.894 Dependency libvfn skipped: feature with-libvfn disabled 00:04:02.894 Found CMake: /usr/bin/cmake (3.27.7) 00:04:02.894 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:02.894 Subproject spdk : skipped: feature with-spdk disabled 00:04:02.894 Run-time dependency appleframeworks found: NO (tried framework) 00:04:02.894 Run-time dependency appleframeworks found: NO (tried framework) 00:04:02.894 Library rt found: YES 00:04:02.894 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:02.894 Configuring xnvme_config.h using configuration 00:04:02.894 Configuring xnvme.spec using configuration 00:04:02.894 Run-time dependency bash-completion found: YES 2.11 00:04:02.894 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:02.894 Program cp found: YES (/usr/bin/cp) 00:04:02.894 Build targets in project: 3 00:04:02.894 00:04:02.894 xnvme 0.7.5 00:04:02.894 00:04:02.894 Subprojects 00:04:02.894 spdk : NO Feature 'with-spdk' disabled 00:04:02.894 00:04:02.894 User defined options 00:04:02.894 examples : false 00:04:02.894 tests : false 00:04:02.894 tools : false 00:04:02.894 with-libaio : enabled 00:04:02.894 with-liburing: enabled 00:04:02.894 with-libvfn : disabled 00:04:02.894 with-spdk : disabled 00:04:02.894 00:04:02.894 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:03.153 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:03.153 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:03.153 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:03.153 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:03.153 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:03.153 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:03.153 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:03.153 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:03.153 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:03.153 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:03.153 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:03.153 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:03.153 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:03.153 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:03.153 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:03.153 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:03.153 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:03.412 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:03.412 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:03.412 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:03.412 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:03.412 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:03.412 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:03.412 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:03.412 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:03.412 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:03.412 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:03.412 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:03.412 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:03.412 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:03.412 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:03.412 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:03.412 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:03.412 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:03.412 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:03.412 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:03.412 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:03.412 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:03.412 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:03.412 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 
00:04:03.412 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:03.412 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:03.412 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:03.412 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:03.412 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:03.412 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:03.412 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:03.412 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:03.412 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:03.412 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:03.412 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:03.412 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:03.412 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:03.670 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:03.670 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:03.670 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:03.670 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:03.670 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:03.670 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:03.670 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:03.670 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:03.670 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:03.670 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:03.670 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:03.670 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:03.670 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:03.670 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:03.670 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:03.670 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:03.670 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:03.670 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:03.670 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:03.928 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:03.928 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:04.188 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:04.188 [75/76] Linking static target lib/libxnvme.a 00:04:04.188 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:04.188 INFO: autodetecting backend as ninja 00:04:04.188 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:04.188 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:36.325 CC lib/ut/ut.o 00:04:36.325 CC lib/log/log_deprecated.o 00:04:36.325 CC lib/log/log_flags.o 00:04:36.325 CC lib/log/log.o 00:04:36.325 CC lib/ut_mock/mock.o 00:04:36.325 LIB libspdk_ut_mock.a 00:04:36.325 LIB libspdk_ut.a 00:04:36.325 LIB libspdk_log.a 00:04:36.325 SO 
libspdk_ut_mock.so.6.0 00:04:36.325 SO libspdk_ut.so.2.0 00:04:36.325 SO libspdk_log.so.7.1 00:04:36.325 SYMLINK libspdk_ut_mock.so 00:04:36.325 SYMLINK libspdk_ut.so 00:04:36.325 SYMLINK libspdk_log.so 00:04:36.325 CC lib/dma/dma.o 00:04:36.325 CXX lib/trace_parser/trace.o 00:04:36.325 CC lib/util/base64.o 00:04:36.325 CC lib/util/bit_array.o 00:04:36.325 CC lib/util/crc16.o 00:04:36.325 CC lib/util/cpuset.o 00:04:36.325 CC lib/util/crc32c.o 00:04:36.325 CC lib/ioat/ioat.o 00:04:36.325 CC lib/util/crc32.o 00:04:36.325 CC lib/vfio_user/host/vfio_user_pci.o 00:04:36.325 CC lib/util/crc32_ieee.o 00:04:36.325 CC lib/util/crc64.o 00:04:36.325 CC lib/util/dif.o 00:04:36.325 CC lib/vfio_user/host/vfio_user.o 00:04:36.325 CC lib/util/fd.o 00:04:36.325 LIB libspdk_dma.a 00:04:36.325 CC lib/util/fd_group.o 00:04:36.325 SO libspdk_dma.so.5.0 00:04:36.325 CC lib/util/file.o 00:04:36.325 SYMLINK libspdk_dma.so 00:04:36.325 CC lib/util/hexlify.o 00:04:36.325 CC lib/util/iov.o 00:04:36.325 CC lib/util/math.o 00:04:36.325 LIB libspdk_ioat.a 00:04:36.325 SO libspdk_ioat.so.7.0 00:04:36.325 CC lib/util/net.o 00:04:36.325 SYMLINK libspdk_ioat.so 00:04:36.325 CC lib/util/pipe.o 00:04:36.325 LIB libspdk_vfio_user.a 00:04:36.325 CC lib/util/strerror_tls.o 00:04:36.325 SO libspdk_vfio_user.so.5.0 00:04:36.325 CC lib/util/string.o 00:04:36.325 CC lib/util/uuid.o 00:04:36.325 CC lib/util/xor.o 00:04:36.325 SYMLINK libspdk_vfio_user.so 00:04:36.325 CC lib/util/zipf.o 00:04:36.325 CC lib/util/md5.o 00:04:36.325 LIB libspdk_util.a 00:04:36.325 SO libspdk_util.so.10.1 00:04:36.325 LIB libspdk_trace_parser.a 00:04:36.325 SO libspdk_trace_parser.so.6.0 00:04:36.325 SYMLINK libspdk_util.so 00:04:36.325 SYMLINK libspdk_trace_parser.so 00:04:36.325 CC lib/idxd/idxd.o 00:04:36.325 CC lib/idxd/idxd_user.o 00:04:36.325 CC lib/idxd/idxd_kernel.o 00:04:36.325 CC lib/env_dpdk/env.o 00:04:36.325 CC lib/env_dpdk/memory.o 00:04:36.325 CC lib/env_dpdk/pci.o 00:04:36.325 CC lib/rdma_utils/rdma_utils.o 00:04:36.325 CC lib/json/json_parse.o 00:04:36.325 CC lib/vmd/vmd.o 00:04:36.325 CC lib/conf/conf.o 00:04:36.325 CC lib/vmd/led.o 00:04:36.325 CC lib/json/json_util.o 00:04:36.325 CC lib/json/json_write.o 00:04:36.325 LIB libspdk_conf.a 00:04:36.325 SO libspdk_conf.so.6.0 00:04:36.325 LIB libspdk_rdma_utils.a 00:04:36.325 SO libspdk_rdma_utils.so.1.0 00:04:36.325 CC lib/env_dpdk/init.o 00:04:36.325 SYMLINK libspdk_conf.so 00:04:36.325 CC lib/env_dpdk/threads.o 00:04:36.325 SYMLINK libspdk_rdma_utils.so 00:04:36.325 CC lib/env_dpdk/pci_ioat.o 00:04:36.325 CC lib/env_dpdk/pci_virtio.o 00:04:36.325 CC lib/env_dpdk/pci_vmd.o 00:04:36.325 CC lib/env_dpdk/pci_idxd.o 00:04:36.325 CC lib/env_dpdk/pci_event.o 00:04:36.325 CC lib/env_dpdk/sigbus_handler.o 00:04:36.325 LIB libspdk_json.a 00:04:36.325 CC lib/env_dpdk/pci_dpdk.o 00:04:36.325 SO libspdk_json.so.6.0 00:04:36.325 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:36.325 CC lib/rdma_provider/common.o 00:04:36.325 SYMLINK libspdk_json.so 00:04:36.325 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:36.325 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:36.325 LIB libspdk_vmd.a 00:04:36.325 LIB libspdk_idxd.a 00:04:36.325 SO libspdk_vmd.so.6.0 00:04:36.325 SO libspdk_idxd.so.12.1 00:04:36.325 SYMLINK libspdk_vmd.so 00:04:36.325 SYMLINK libspdk_idxd.so 00:04:36.325 LIB libspdk_rdma_provider.a 00:04:36.325 CC lib/jsonrpc/jsonrpc_server.o 00:04:36.325 CC lib/jsonrpc/jsonrpc_client.o 00:04:36.325 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:36.325 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:36.325 SO 
libspdk_rdma_provider.so.7.0 00:04:36.325 SYMLINK libspdk_rdma_provider.so 00:04:36.325 LIB libspdk_jsonrpc.a 00:04:36.325 SO libspdk_jsonrpc.so.6.0 00:04:36.325 SYMLINK libspdk_jsonrpc.so 00:04:36.325 CC lib/rpc/rpc.o 00:04:36.325 LIB libspdk_env_dpdk.a 00:04:36.325 SO libspdk_env_dpdk.so.15.1 00:04:36.325 SYMLINK libspdk_env_dpdk.so 00:04:36.325 LIB libspdk_rpc.a 00:04:36.325 SO libspdk_rpc.so.6.0 00:04:36.325 SYMLINK libspdk_rpc.so 00:04:36.325 CC lib/keyring/keyring_rpc.o 00:04:36.325 CC lib/keyring/keyring.o 00:04:36.325 CC lib/trace/trace.o 00:04:36.325 CC lib/trace/trace_flags.o 00:04:36.325 CC lib/notify/notify.o 00:04:36.325 CC lib/notify/notify_rpc.o 00:04:36.325 CC lib/trace/trace_rpc.o 00:04:36.325 LIB libspdk_notify.a 00:04:36.325 SO libspdk_notify.so.6.0 00:04:36.583 LIB libspdk_keyring.a 00:04:36.583 SO libspdk_keyring.so.2.0 00:04:36.583 SYMLINK libspdk_notify.so 00:04:36.583 LIB libspdk_trace.a 00:04:36.583 SYMLINK libspdk_keyring.so 00:04:36.583 SO libspdk_trace.so.11.0 00:04:36.583 SYMLINK libspdk_trace.so 00:04:36.840 CC lib/sock/sock.o 00:04:36.840 CC lib/sock/sock_rpc.o 00:04:36.840 CC lib/thread/thread.o 00:04:36.840 CC lib/thread/iobuf.o 00:04:37.409 LIB libspdk_sock.a 00:04:37.409 SO libspdk_sock.so.10.0 00:04:37.409 SYMLINK libspdk_sock.so 00:04:37.670 CC lib/nvme/nvme_ctrlr.o 00:04:37.670 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:37.670 CC lib/nvme/nvme_ns_cmd.o 00:04:37.670 CC lib/nvme/nvme_fabric.o 00:04:37.670 CC lib/nvme/nvme_ns.o 00:04:37.670 CC lib/nvme/nvme_qpair.o 00:04:37.670 CC lib/nvme/nvme_pcie_common.o 00:04:37.670 CC lib/nvme/nvme_pcie.o 00:04:37.670 CC lib/nvme/nvme.o 00:04:38.239 CC lib/nvme/nvme_quirks.o 00:04:38.239 CC lib/nvme/nvme_transport.o 00:04:38.239 CC lib/nvme/nvme_discovery.o 00:04:38.239 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:38.239 LIB libspdk_thread.a 00:04:38.498 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:38.498 CC lib/nvme/nvme_tcp.o 00:04:38.498 SO libspdk_thread.so.11.0 00:04:38.498 CC lib/nvme/nvme_opal.o 00:04:38.498 SYMLINK libspdk_thread.so 00:04:38.498 CC lib/nvme/nvme_io_msg.o 00:04:38.498 CC lib/nvme/nvme_poll_group.o 00:04:38.498 CC lib/nvme/nvme_zns.o 00:04:38.756 CC lib/nvme/nvme_stubs.o 00:04:38.756 CC lib/nvme/nvme_auth.o 00:04:38.756 CC lib/nvme/nvme_cuse.o 00:04:38.756 CC lib/nvme/nvme_rdma.o 00:04:39.015 CC lib/accel/accel.o 00:04:39.015 CC lib/blob/blobstore.o 00:04:39.015 CC lib/init/json_config.o 00:04:39.275 CC lib/virtio/virtio.o 00:04:39.275 CC lib/init/subsystem.o 00:04:39.275 CC lib/fsdev/fsdev.o 00:04:39.535 CC lib/init/subsystem_rpc.o 00:04:39.535 CC lib/virtio/virtio_vhost_user.o 00:04:39.535 CC lib/virtio/virtio_vfio_user.o 00:04:39.535 CC lib/init/rpc.o 00:04:39.795 CC lib/virtio/virtio_pci.o 00:04:39.795 LIB libspdk_init.a 00:04:39.795 CC lib/blob/request.o 00:04:39.795 SO libspdk_init.so.6.0 00:04:39.795 CC lib/fsdev/fsdev_io.o 00:04:39.795 SYMLINK libspdk_init.so 00:04:39.795 CC lib/accel/accel_rpc.o 00:04:39.795 CC lib/blob/zeroes.o 00:04:39.795 CC lib/fsdev/fsdev_rpc.o 00:04:40.056 CC lib/blob/blob_bs_dev.o 00:04:40.056 LIB libspdk_virtio.a 00:04:40.056 CC lib/accel/accel_sw.o 00:04:40.056 SO libspdk_virtio.so.7.0 00:04:40.056 SYMLINK libspdk_virtio.so 00:04:40.056 LIB libspdk_fsdev.a 00:04:40.056 LIB libspdk_nvme.a 00:04:40.056 SO libspdk_fsdev.so.2.0 00:04:40.056 CC lib/event/app.o 00:04:40.056 CC lib/event/log_rpc.o 00:04:40.056 CC lib/event/reactor.o 00:04:40.056 CC lib/event/app_rpc.o 00:04:40.056 CC lib/event/scheduler_static.o 00:04:40.317 SYMLINK libspdk_fsdev.so 00:04:40.317 LIB libspdk_accel.a 
00:04:40.317 SO libspdk_nvme.so.15.0 00:04:40.317 SO libspdk_accel.so.16.0 00:04:40.317 SYMLINK libspdk_accel.so 00:04:40.317 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:40.578 SYMLINK libspdk_nvme.so 00:04:40.578 CC lib/bdev/bdev.o 00:04:40.578 CC lib/bdev/bdev_rpc.o 00:04:40.578 CC lib/bdev/bdev_zone.o 00:04:40.578 CC lib/bdev/scsi_nvme.o 00:04:40.578 CC lib/bdev/part.o 00:04:40.578 LIB libspdk_event.a 00:04:40.578 SO libspdk_event.so.14.0 00:04:40.837 SYMLINK libspdk_event.so 00:04:40.837 LIB libspdk_fuse_dispatcher.a 00:04:40.837 SO libspdk_fuse_dispatcher.so.1.0 00:04:41.096 SYMLINK libspdk_fuse_dispatcher.so 00:04:41.663 LIB libspdk_blob.a 00:04:41.663 SO libspdk_blob.so.12.0 00:04:41.921 SYMLINK libspdk_blob.so 00:04:41.921 CC lib/blobfs/tree.o 00:04:41.921 CC lib/blobfs/blobfs.o 00:04:41.921 CC lib/lvol/lvol.o 00:04:42.858 LIB libspdk_blobfs.a 00:04:42.858 SO libspdk_blobfs.so.11.0 00:04:42.858 SYMLINK libspdk_blobfs.so 00:04:42.858 LIB libspdk_bdev.a 00:04:42.858 LIB libspdk_lvol.a 00:04:42.858 SO libspdk_lvol.so.11.0 00:04:42.858 SO libspdk_bdev.so.17.0 00:04:42.858 SYMLINK libspdk_lvol.so 00:04:42.858 SYMLINK libspdk_bdev.so 00:04:43.116 CC lib/nvmf/ctrlr.o 00:04:43.116 CC lib/nvmf/ctrlr_discovery.o 00:04:43.116 CC lib/nvmf/ctrlr_bdev.o 00:04:43.116 CC lib/nvmf/subsystem.o 00:04:43.116 CC lib/nvmf/nvmf_rpc.o 00:04:43.116 CC lib/scsi/dev.o 00:04:43.116 CC lib/nvmf/nvmf.o 00:04:43.116 CC lib/ftl/ftl_core.o 00:04:43.116 CC lib/ublk/ublk.o 00:04:43.116 CC lib/nbd/nbd.o 00:04:43.374 CC lib/scsi/lun.o 00:04:43.374 CC lib/ftl/ftl_init.o 00:04:43.374 CC lib/nbd/nbd_rpc.o 00:04:43.632 CC lib/ftl/ftl_layout.o 00:04:43.632 CC lib/scsi/port.o 00:04:43.632 CC lib/ublk/ublk_rpc.o 00:04:43.632 CC lib/ftl/ftl_debug.o 00:04:43.632 LIB libspdk_nbd.a 00:04:43.632 SO libspdk_nbd.so.7.0 00:04:43.632 CC lib/scsi/scsi.o 00:04:43.632 LIB libspdk_ublk.a 00:04:43.632 SYMLINK libspdk_nbd.so 00:04:43.632 CC lib/scsi/scsi_bdev.o 00:04:43.632 SO libspdk_ublk.so.3.0 00:04:43.632 CC lib/nvmf/transport.o 00:04:43.632 CC lib/ftl/ftl_io.o 00:04:43.890 SYMLINK libspdk_ublk.so 00:04:43.890 CC lib/ftl/ftl_sb.o 00:04:43.890 CC lib/ftl/ftl_l2p.o 00:04:43.890 CC lib/nvmf/tcp.o 00:04:43.890 CC lib/nvmf/stubs.o 00:04:43.890 CC lib/nvmf/mdns_server.o 00:04:43.890 CC lib/ftl/ftl_l2p_flat.o 00:04:43.890 CC lib/ftl/ftl_nv_cache.o 00:04:43.890 CC lib/scsi/scsi_pr.o 00:04:44.150 CC lib/ftl/ftl_band.o 00:04:44.150 CC lib/scsi/scsi_rpc.o 00:04:44.150 CC lib/nvmf/rdma.o 00:04:44.150 CC lib/nvmf/auth.o 00:04:44.150 CC lib/ftl/ftl_band_ops.o 00:04:44.409 CC lib/scsi/task.o 00:04:44.409 CC lib/ftl/ftl_writer.o 00:04:44.409 CC lib/ftl/ftl_rq.o 00:04:44.409 CC lib/ftl/ftl_reloc.o 00:04:44.409 CC lib/ftl/ftl_l2p_cache.o 00:04:44.667 LIB libspdk_scsi.a 00:04:44.667 CC lib/ftl/ftl_p2l.o 00:04:44.667 SO libspdk_scsi.so.9.0 00:04:44.667 CC lib/ftl/ftl_p2l_log.o 00:04:44.667 SYMLINK libspdk_scsi.so 00:04:44.667 CC lib/ftl/mngt/ftl_mngt.o 00:04:44.926 CC lib/iscsi/conn.o 00:04:44.926 CC lib/iscsi/init_grp.o 00:04:44.926 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:44.926 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:44.926 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:44.926 CC lib/iscsi/iscsi.o 00:04:45.184 CC lib/iscsi/param.o 00:04:45.184 CC lib/vhost/vhost.o 00:04:45.184 CC lib/vhost/vhost_rpc.o 00:04:45.184 CC lib/vhost/vhost_scsi.o 00:04:45.184 CC lib/vhost/vhost_blk.o 00:04:45.184 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.442 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.442 CC lib/iscsi/portal_grp.o 00:04:45.442 CC lib/iscsi/tgt_node.o 00:04:45.442 CC 
lib/iscsi/iscsi_subsystem.o 00:04:45.442 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.700 CC lib/vhost/rte_vhost_user.o 00:04:45.700 CC lib/iscsi/iscsi_rpc.o 00:04:45.700 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.700 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.700 CC lib/iscsi/task.o 00:04:45.700 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.958 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.958 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.958 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.958 CC lib/ftl/utils/ftl_conf.o 00:04:45.958 CC lib/ftl/utils/ftl_md.o 00:04:45.958 CC lib/ftl/utils/ftl_mempool.o 00:04:45.958 CC lib/ftl/utils/ftl_bitmap.o 00:04:45.958 LIB libspdk_iscsi.a 00:04:45.958 CC lib/ftl/utils/ftl_property.o 00:04:45.958 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:45.958 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:46.216 SO libspdk_iscsi.so.8.0 00:04:46.216 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.216 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.216 LIB libspdk_nvmf.a 00:04:46.216 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.216 SYMLINK libspdk_iscsi.so 00:04:46.216 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.216 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.216 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.216 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.216 SO libspdk_nvmf.so.20.0 00:04:46.216 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.216 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.216 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.475 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.475 LIB libspdk_vhost.a 00:04:46.475 CC lib/ftl/base/ftl_base_dev.o 00:04:46.475 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.475 CC lib/ftl/ftl_trace.o 00:04:46.475 SO libspdk_vhost.so.8.0 00:04:46.475 SYMLINK libspdk_nvmf.so 00:04:46.475 SYMLINK libspdk_vhost.so 00:04:46.475 LIB libspdk_ftl.a 00:04:46.733 SO libspdk_ftl.so.9.0 00:04:46.991 SYMLINK libspdk_ftl.so 00:04:47.249 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.249 CC module/accel/ioat/accel_ioat.o 00:04:47.249 CC module/accel/dsa/accel_dsa.o 00:04:47.249 CC module/blob/bdev/blob_bdev.o 00:04:47.249 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.249 CC module/accel/iaa/accel_iaa.o 00:04:47.249 CC module/fsdev/aio/fsdev_aio.o 00:04:47.249 CC module/accel/error/accel_error.o 00:04:47.249 CC module/keyring/file/keyring.o 00:04:47.249 CC module/sock/posix/posix.o 00:04:47.249 LIB libspdk_env_dpdk_rpc.a 00:04:47.249 SO libspdk_env_dpdk_rpc.so.6.0 00:04:47.508 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.508 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:47.508 CC module/keyring/file/keyring_rpc.o 00:04:47.508 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.508 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.508 LIB libspdk_scheduler_dynamic.a 00:04:47.508 SO libspdk_scheduler_dynamic.so.4.0 00:04:47.508 CC module/accel/error/accel_error_rpc.o 00:04:47.508 LIB libspdk_keyring_file.a 00:04:47.508 LIB libspdk_accel_ioat.a 00:04:47.508 SO libspdk_keyring_file.so.2.0 00:04:47.508 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.508 SO libspdk_accel_ioat.so.6.0 00:04:47.508 LIB libspdk_blob_bdev.a 00:04:47.508 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.508 CC module/accel/dsa/accel_dsa_rpc.o 00:04:47.508 SO libspdk_blob_bdev.so.12.0 00:04:47.508 LIB libspdk_accel_iaa.a 00:04:47.508 SYMLINK libspdk_accel_ioat.so 00:04:47.508 SYMLINK libspdk_keyring_file.so 00:04:47.508 SO libspdk_accel_iaa.so.3.0 00:04:47.508 LIB libspdk_accel_error.a 00:04:47.508 SYMLINK libspdk_blob_bdev.so 00:04:47.767 SO libspdk_accel_error.so.2.0 00:04:47.767 SYMLINK libspdk_accel_iaa.so 00:04:47.767 LIB 
libspdk_accel_dsa.a 00:04:47.767 SYMLINK libspdk_accel_error.so 00:04:47.767 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.767 SO libspdk_accel_dsa.so.5.0 00:04:47.767 CC module/keyring/linux/keyring.o 00:04:47.768 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.768 SYMLINK libspdk_accel_dsa.so 00:04:47.768 CC module/keyring/linux/keyring_rpc.o 00:04:47.768 LIB libspdk_fsdev_aio.a 00:04:47.768 CC module/bdev/error/vbdev_error.o 00:04:47.768 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.768 CC module/bdev/delay/vbdev_delay.o 00:04:47.768 SO libspdk_fsdev_aio.so.1.0 00:04:47.768 CC module/bdev/gpt/gpt.o 00:04:47.768 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:47.768 CC module/blobfs/bdev/blobfs_bdev.o 00:04:47.768 LIB libspdk_scheduler_gscheduler.a 00:04:47.768 LIB libspdk_keyring_linux.a 00:04:48.026 SO libspdk_scheduler_gscheduler.so.4.0 00:04:48.026 SO libspdk_keyring_linux.so.1.0 00:04:48.026 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:48.026 SYMLINK libspdk_fsdev_aio.so 00:04:48.026 CC module/bdev/gpt/vbdev_gpt.o 00:04:48.026 CC module/bdev/error/vbdev_error_rpc.o 00:04:48.026 SYMLINK libspdk_scheduler_gscheduler.so 00:04:48.026 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:48.026 SYMLINK libspdk_keyring_linux.so 00:04:48.026 CC module/bdev/lvol/vbdev_lvol.o 00:04:48.026 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.026 LIB libspdk_sock_posix.a 00:04:48.026 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.026 SO libspdk_sock_posix.so.6.0 00:04:48.026 LIB libspdk_bdev_error.a 00:04:48.026 LIB libspdk_blobfs_bdev.a 00:04:48.026 CC module/bdev/malloc/bdev_malloc.o 00:04:48.026 SO libspdk_bdev_error.so.6.0 00:04:48.026 SO libspdk_blobfs_bdev.so.6.0 00:04:48.026 SYMLINK libspdk_sock_posix.so 00:04:48.027 CC module/bdev/null/bdev_null.o 00:04:48.027 CC module/bdev/null/bdev_null_rpc.o 00:04:48.286 SYMLINK libspdk_bdev_error.so 00:04:48.286 SYMLINK libspdk_blobfs_bdev.so 00:04:48.286 LIB libspdk_bdev_gpt.a 00:04:48.286 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:48.286 SO libspdk_bdev_gpt.so.6.0 00:04:48.286 LIB libspdk_bdev_delay.a 00:04:48.286 SO libspdk_bdev_delay.so.6.0 00:04:48.286 SYMLINK libspdk_bdev_gpt.so 00:04:48.286 SYMLINK libspdk_bdev_delay.so 00:04:48.286 LIB libspdk_bdev_null.a 00:04:48.286 CC module/bdev/nvme/bdev_nvme.o 00:04:48.286 SO libspdk_bdev_null.so.6.0 00:04:48.286 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.286 LIB libspdk_bdev_lvol.a 00:04:48.286 SO libspdk_bdev_lvol.so.6.0 00:04:48.544 SYMLINK libspdk_bdev_null.so 00:04:48.544 CC module/bdev/raid/bdev_raid.o 00:04:48.544 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.544 CC module/bdev/split/vbdev_split.o 00:04:48.544 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.544 CC module/bdev/xnvme/bdev_xnvme.o 00:04:48.544 LIB libspdk_bdev_malloc.a 00:04:48.544 CC module/bdev/aio/bdev_aio.o 00:04:48.544 SYMLINK libspdk_bdev_lvol.so 00:04:48.544 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:48.544 SO libspdk_bdev_malloc.so.6.0 00:04:48.544 SYMLINK libspdk_bdev_malloc.so 00:04:48.544 CC module/bdev/aio/bdev_aio_rpc.o 00:04:48.544 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.544 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.544 CC module/bdev/raid/raid0.o 00:04:48.544 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.544 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.801 LIB libspdk_bdev_xnvme.a 00:04:48.801 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.801 SO libspdk_bdev_xnvme.so.3.0 00:04:48.801 LIB libspdk_bdev_split.a 00:04:48.801 SO libspdk_bdev_split.so.6.0 
00:04:48.801 SYMLINK libspdk_bdev_xnvme.so 00:04:48.801 CC module/bdev/nvme/nvme_rpc.o 00:04:48.801 CC module/bdev/nvme/bdev_mdns_client.o 00:04:48.801 LIB libspdk_bdev_aio.a 00:04:48.801 SYMLINK libspdk_bdev_split.so 00:04:48.801 CC module/bdev/nvme/vbdev_opal.o 00:04:48.801 LIB libspdk_bdev_zone_block.a 00:04:48.801 LIB libspdk_bdev_passthru.a 00:04:48.801 SO libspdk_bdev_aio.so.6.0 00:04:48.801 SO libspdk_bdev_zone_block.so.6.0 00:04:48.801 SO libspdk_bdev_passthru.so.6.0 00:04:48.801 CC module/bdev/raid/raid1.o 00:04:48.801 SYMLINK libspdk_bdev_aio.so 00:04:48.801 CC module/bdev/raid/concat.o 00:04:48.801 SYMLINK libspdk_bdev_zone_block.so 00:04:48.801 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:48.801 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:48.801 SYMLINK libspdk_bdev_passthru.so 00:04:49.064 CC module/bdev/ftl/bdev_ftl.o 00:04:49.064 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:49.064 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:49.064 CC module/bdev/iscsi/bdev_iscsi.o 00:04:49.064 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:49.064 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:49.064 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:49.324 LIB libspdk_bdev_ftl.a 00:04:49.324 SO libspdk_bdev_ftl.so.6.0 00:04:49.324 LIB libspdk_bdev_iscsi.a 00:04:49.324 SYMLINK libspdk_bdev_ftl.so 00:04:49.324 SO libspdk_bdev_iscsi.so.6.0 00:04:49.582 LIB libspdk_bdev_raid.a 00:04:49.582 SYMLINK libspdk_bdev_iscsi.so 00:04:49.582 SO libspdk_bdev_raid.so.6.0 00:04:49.582 SYMLINK libspdk_bdev_raid.so 00:04:49.582 LIB libspdk_bdev_virtio.a 00:04:49.582 SO libspdk_bdev_virtio.so.6.0 00:04:49.842 SYMLINK libspdk_bdev_virtio.so 00:04:50.408 LIB libspdk_bdev_nvme.a 00:04:50.408 SO libspdk_bdev_nvme.so.7.1 00:04:50.667 SYMLINK libspdk_bdev_nvme.so 00:04:50.927 CC module/event/subsystems/keyring/keyring.o 00:04:50.927 CC module/event/subsystems/vmd/vmd.o 00:04:50.927 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:50.927 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.927 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.927 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.927 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.927 CC module/event/subsystems/sock/sock.o 00:04:50.927 CC module/event/subsystems/fsdev/fsdev.o 00:04:51.188 LIB libspdk_event_fsdev.a 00:04:51.188 LIB libspdk_event_vhost_blk.a 00:04:51.188 LIB libspdk_event_keyring.a 00:04:51.188 LIB libspdk_event_iobuf.a 00:04:51.188 LIB libspdk_event_vmd.a 00:04:51.188 LIB libspdk_event_scheduler.a 00:04:51.188 LIB libspdk_event_sock.a 00:04:51.188 SO libspdk_event_fsdev.so.1.0 00:04:51.188 SO libspdk_event_vhost_blk.so.3.0 00:04:51.188 SO libspdk_event_keyring.so.1.0 00:04:51.188 SO libspdk_event_scheduler.so.4.0 00:04:51.188 SO libspdk_event_iobuf.so.3.0 00:04:51.188 SO libspdk_event_sock.so.5.0 00:04:51.188 SO libspdk_event_vmd.so.6.0 00:04:51.188 SYMLINK libspdk_event_fsdev.so 00:04:51.188 SYMLINK libspdk_event_vhost_blk.so 00:04:51.188 SYMLINK libspdk_event_keyring.so 00:04:51.188 SYMLINK libspdk_event_scheduler.so 00:04:51.188 SYMLINK libspdk_event_vmd.so 00:04:51.188 SYMLINK libspdk_event_sock.so 00:04:51.188 SYMLINK libspdk_event_iobuf.so 00:04:51.449 CC module/event/subsystems/accel/accel.o 00:04:51.709 LIB libspdk_event_accel.a 00:04:51.709 SO libspdk_event_accel.so.6.0 00:04:51.709 SYMLINK libspdk_event_accel.so 00:04:51.969 CC module/event/subsystems/bdev/bdev.o 00:04:51.969 LIB libspdk_event_bdev.a 00:04:52.229 SO libspdk_event_bdev.so.6.0 00:04:52.229 SYMLINK libspdk_event_bdev.so 00:04:52.229 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.229 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.502 CC module/event/subsystems/nbd/nbd.o 00:04:52.502 CC module/event/subsystems/ublk/ublk.o 00:04:52.502 CC module/event/subsystems/scsi/scsi.o 00:04:52.502 LIB libspdk_event_nbd.a 00:04:52.502 LIB libspdk_event_ublk.a 00:04:52.502 SO libspdk_event_ublk.so.3.0 00:04:52.502 LIB libspdk_event_scsi.a 00:04:52.502 SO libspdk_event_nbd.so.6.0 00:04:52.502 SO libspdk_event_scsi.so.6.0 00:04:52.502 SYMLINK libspdk_event_ublk.so 00:04:52.502 SYMLINK libspdk_event_nbd.so 00:04:52.502 LIB libspdk_event_nvmf.a 00:04:52.502 SYMLINK libspdk_event_scsi.so 00:04:52.502 SO libspdk_event_nvmf.so.6.0 00:04:52.798 SYMLINK libspdk_event_nvmf.so 00:04:52.798 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.798 CC module/event/subsystems/iscsi/iscsi.o 00:04:52.798 LIB libspdk_event_vhost_scsi.a 00:04:52.798 LIB libspdk_event_iscsi.a 00:04:53.065 SO libspdk_event_vhost_scsi.so.3.0 00:04:53.065 SO libspdk_event_iscsi.so.6.0 00:04:53.065 SYMLINK libspdk_event_vhost_scsi.so 00:04:53.065 SYMLINK libspdk_event_iscsi.so 00:04:53.065 SO libspdk.so.6.0 00:04:53.065 SYMLINK libspdk.so 00:04:53.327 CXX app/trace/trace.o 00:04:53.327 CC app/trace_record/trace_record.o 00:04:53.327 CC app/spdk_lspci/spdk_lspci.o 00:04:53.327 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.327 CC app/nvmf_tgt/nvmf_main.o 00:04:53.327 CC app/iscsi_tgt/iscsi_tgt.o 00:04:53.327 CC app/spdk_tgt/spdk_tgt.o 00:04:53.327 CC examples/ioat/perf/perf.o 00:04:53.327 CC examples/util/zipf/zipf.o 00:04:53.327 CC test/thread/poller_perf/poller_perf.o 00:04:53.586 LINK spdk_lspci 00:04:53.586 LINK nvmf_tgt 00:04:53.586 LINK interrupt_tgt 00:04:53.586 LINK spdk_tgt 00:04:53.586 LINK iscsi_tgt 00:04:53.586 LINK poller_perf 00:04:53.586 LINK ioat_perf 00:04:53.586 LINK zipf 00:04:53.586 LINK spdk_trace_record 00:04:53.586 LINK spdk_trace 00:04:53.586 CC examples/ioat/verify/verify.o 00:04:53.847 CC app/spdk_nvme_perf/perf.o 00:04:53.847 CC app/spdk_nvme_identify/identify.o 00:04:53.847 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.847 CC app/spdk_top/spdk_top.o 00:04:53.847 TEST_HEADER include/spdk/accel.h 00:04:53.847 TEST_HEADER include/spdk/accel_module.h 00:04:53.847 TEST_HEADER include/spdk/assert.h 00:04:53.847 TEST_HEADER include/spdk/barrier.h 00:04:53.847 TEST_HEADER include/spdk/base64.h 00:04:53.847 TEST_HEADER include/spdk/bdev.h 00:04:53.847 TEST_HEADER include/spdk/bdev_module.h 00:04:53.847 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.847 TEST_HEADER include/spdk/bit_array.h 00:04:53.847 TEST_HEADER include/spdk/bit_pool.h 00:04:53.847 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.847 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.847 LINK verify 00:04:53.847 TEST_HEADER include/spdk/blobfs.h 00:04:53.847 TEST_HEADER include/spdk/blob.h 00:04:53.847 TEST_HEADER include/spdk/conf.h 00:04:53.847 TEST_HEADER include/spdk/config.h 00:04:53.847 TEST_HEADER include/spdk/cpuset.h 00:04:53.847 TEST_HEADER include/spdk/crc16.h 00:04:53.847 TEST_HEADER include/spdk/crc32.h 00:04:53.847 TEST_HEADER include/spdk/crc64.h 00:04:53.847 TEST_HEADER include/spdk/dif.h 00:04:53.847 TEST_HEADER include/spdk/dma.h 00:04:53.847 TEST_HEADER include/spdk/endian.h 00:04:53.847 CC app/spdk_dd/spdk_dd.o 00:04:53.847 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.847 TEST_HEADER include/spdk/env.h 00:04:53.847 TEST_HEADER include/spdk/event.h 00:04:53.847 TEST_HEADER include/spdk/fd_group.h 00:04:53.847 TEST_HEADER include/spdk/fd.h 00:04:53.847 
TEST_HEADER include/spdk/file.h 00:04:53.847 TEST_HEADER include/spdk/fsdev.h 00:04:53.847 TEST_HEADER include/spdk/fsdev_module.h 00:04:53.847 TEST_HEADER include/spdk/ftl.h 00:04:53.847 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.847 TEST_HEADER include/spdk/hexlify.h 00:04:53.847 TEST_HEADER include/spdk/histogram_data.h 00:04:53.847 TEST_HEADER include/spdk/idxd.h 00:04:53.847 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.847 TEST_HEADER include/spdk/init.h 00:04:53.847 TEST_HEADER include/spdk/ioat.h 00:04:53.847 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.847 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.847 TEST_HEADER include/spdk/json.h 00:04:53.847 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.847 TEST_HEADER include/spdk/keyring.h 00:04:53.847 TEST_HEADER include/spdk/keyring_module.h 00:04:53.847 TEST_HEADER include/spdk/likely.h 00:04:53.847 TEST_HEADER include/spdk/log.h 00:04:53.847 TEST_HEADER include/spdk/lvol.h 00:04:53.847 TEST_HEADER include/spdk/md5.h 00:04:53.847 TEST_HEADER include/spdk/memory.h 00:04:53.847 TEST_HEADER include/spdk/mmio.h 00:04:53.847 TEST_HEADER include/spdk/nbd.h 00:04:53.847 TEST_HEADER include/spdk/net.h 00:04:53.847 CC app/fio/nvme/fio_plugin.o 00:04:53.847 TEST_HEADER include/spdk/notify.h 00:04:53.847 TEST_HEADER include/spdk/nvme.h 00:04:53.847 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.847 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.847 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.847 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.847 CC test/dma/test_dma/test_dma.o 00:04:53.847 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.847 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.847 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.847 TEST_HEADER include/spdk/nvmf.h 00:04:53.847 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.847 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.847 TEST_HEADER include/spdk/opal.h 00:04:53.847 TEST_HEADER include/spdk/opal_spec.h 00:04:53.847 TEST_HEADER include/spdk/pci_ids.h 00:04:53.847 TEST_HEADER include/spdk/pipe.h 00:04:53.847 TEST_HEADER include/spdk/queue.h 00:04:53.847 TEST_HEADER include/spdk/reduce.h 00:04:53.847 CC test/app/bdev_svc/bdev_svc.o 00:04:53.847 TEST_HEADER include/spdk/rpc.h 00:04:53.847 TEST_HEADER include/spdk/scheduler.h 00:04:53.847 TEST_HEADER include/spdk/scsi.h 00:04:53.847 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.847 TEST_HEADER include/spdk/sock.h 00:04:53.847 TEST_HEADER include/spdk/stdinc.h 00:04:53.847 TEST_HEADER include/spdk/string.h 00:04:53.847 TEST_HEADER include/spdk/thread.h 00:04:53.847 TEST_HEADER include/spdk/trace.h 00:04:53.847 TEST_HEADER include/spdk/trace_parser.h 00:04:53.847 TEST_HEADER include/spdk/tree.h 00:04:53.847 LINK spdk_nvme_discover 00:04:53.847 TEST_HEADER include/spdk/ublk.h 00:04:53.847 TEST_HEADER include/spdk/util.h 00:04:53.847 TEST_HEADER include/spdk/uuid.h 00:04:53.847 TEST_HEADER include/spdk/version.h 00:04:53.847 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.847 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.847 TEST_HEADER include/spdk/vhost.h 00:04:53.847 TEST_HEADER include/spdk/vmd.h 00:04:53.847 TEST_HEADER include/spdk/xor.h 00:04:53.847 TEST_HEADER include/spdk/zipf.h 00:04:53.847 CXX test/cpp_headers/accel.o 00:04:54.108 LINK bdev_svc 00:04:54.108 LINK spdk_dd 00:04:54.108 CC examples/thread/thread/thread_ex.o 00:04:54.108 CXX test/cpp_headers/accel_module.o 00:04:54.108 CC app/vhost/vhost.o 00:04:54.369 CXX test/cpp_headers/assert.o 00:04:54.369 LINK thread 00:04:54.369 LINK test_dma 00:04:54.369 LINK 
vhost 00:04:54.369 CC app/fio/bdev/fio_plugin.o 00:04:54.369 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:54.369 LINK spdk_nvme_perf 00:04:54.369 LINK spdk_nvme_identify 00:04:54.369 LINK spdk_nvme 00:04:54.369 CXX test/cpp_headers/barrier.o 00:04:54.629 CC test/app/histogram_perf/histogram_perf.o 00:04:54.629 CXX test/cpp_headers/base64.o 00:04:54.629 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:54.629 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:54.629 CC examples/sock/hello_world/hello_sock.o 00:04:54.630 LINK histogram_perf 00:04:54.630 CC test/event/event_perf/event_perf.o 00:04:54.630 LINK spdk_top 00:04:54.630 LINK nvme_fuzz 00:04:54.630 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:54.630 CXX test/cpp_headers/bdev.o 00:04:54.630 CC test/env/mem_callbacks/mem_callbacks.o 00:04:54.890 CXX test/cpp_headers/bdev_module.o 00:04:54.890 LINK event_perf 00:04:54.890 LINK spdk_bdev 00:04:54.890 CC test/env/vtophys/vtophys.o 00:04:54.890 LINK hello_sock 00:04:54.890 CC test/event/reactor/reactor.o 00:04:54.890 CXX test/cpp_headers/bdev_zone.o 00:04:54.890 CC test/event/reactor_perf/reactor_perf.o 00:04:54.890 LINK vtophys 00:04:55.150 LINK reactor_perf 00:04:55.150 CC test/rpc_client/rpc_client_test.o 00:04:55.150 CC test/nvme/aer/aer.o 00:04:55.150 LINK reactor 00:04:55.150 CXX test/cpp_headers/bit_array.o 00:04:55.150 LINK vhost_fuzz 00:04:55.150 CC examples/vmd/lsvmd/lsvmd.o 00:04:55.150 CC examples/vmd/led/led.o 00:04:55.150 LINK rpc_client_test 00:04:55.150 LINK mem_callbacks 00:04:55.150 CXX test/cpp_headers/bit_pool.o 00:04:55.150 CC test/event/app_repeat/app_repeat.o 00:04:55.411 LINK aer 00:04:55.411 LINK lsvmd 00:04:55.411 CC examples/idxd/perf/perf.o 00:04:55.411 CC test/event/scheduler/scheduler.o 00:04:55.411 LINK led 00:04:55.411 CXX test/cpp_headers/blob_bdev.o 00:04:55.411 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:55.411 LINK app_repeat 00:04:55.411 CC test/accel/dif/dif.o 00:04:55.411 CC test/nvme/reset/reset.o 00:04:55.411 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.411 CC test/nvme/e2edp/nvme_dp.o 00:04:55.411 CC test/nvme/sgl/sgl.o 00:04:55.411 LINK scheduler 00:04:55.673 CXX test/cpp_headers/blobfs.o 00:04:55.673 LINK env_dpdk_post_init 00:04:55.673 LINK idxd_perf 00:04:55.673 LINK reset 00:04:55.673 CXX test/cpp_headers/blob.o 00:04:55.673 CXX test/cpp_headers/conf.o 00:04:55.673 CC test/nvme/overhead/overhead.o 00:04:55.673 CC test/env/memory/memory_ut.o 00:04:55.673 LINK nvme_dp 00:04:55.673 LINK sgl 00:04:55.673 CXX test/cpp_headers/config.o 00:04:55.934 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:55.934 CXX test/cpp_headers/cpuset.o 00:04:55.934 CC test/nvme/startup/startup.o 00:04:55.934 CC test/nvme/err_injection/err_injection.o 00:04:55.934 LINK overhead 00:04:55.934 CC test/nvme/reserve/reserve.o 00:04:55.934 CXX test/cpp_headers/crc16.o 00:04:55.934 LINK iscsi_fuzz 00:04:55.934 CC test/nvme/simple_copy/simple_copy.o 00:04:55.934 LINK startup 00:04:55.934 LINK dif 00:04:55.934 LINK err_injection 00:04:55.934 CXX test/cpp_headers/crc32.o 00:04:55.934 CXX test/cpp_headers/crc64.o 00:04:56.195 LINK hello_fsdev 00:04:56.195 LINK reserve 00:04:56.195 CXX test/cpp_headers/dif.o 00:04:56.195 CXX test/cpp_headers/dma.o 00:04:56.195 CC test/app/jsoncat/jsoncat.o 00:04:56.195 CC test/nvme/connect_stress/connect_stress.o 00:04:56.195 LINK simple_copy 00:04:56.195 CXX test/cpp_headers/endian.o 00:04:56.195 LINK jsoncat 00:04:56.456 CC test/env/pci/pci_ut.o 00:04:56.456 CC examples/accel/perf/accel_perf.o 00:04:56.456 LINK connect_stress 
00:04:56.456 CC examples/blob/hello_world/hello_blob.o 00:04:56.456 CXX test/cpp_headers/env_dpdk.o 00:04:56.456 CC test/blobfs/mkfs/mkfs.o 00:04:56.456 CC examples/blob/cli/blobcli.o 00:04:56.456 CC test/app/stub/stub.o 00:04:56.456 CC test/lvol/esnap/esnap.o 00:04:56.456 CXX test/cpp_headers/env.o 00:04:56.456 CC test/nvme/boot_partition/boot_partition.o 00:04:56.456 LINK mkfs 00:04:56.456 LINK stub 00:04:56.456 LINK hello_blob 00:04:56.718 CXX test/cpp_headers/event.o 00:04:56.718 LINK boot_partition 00:04:56.718 CXX test/cpp_headers/fd_group.o 00:04:56.718 LINK pci_ut 00:04:56.718 CC test/nvme/compliance/nvme_compliance.o 00:04:56.718 LINK accel_perf 00:04:56.718 CC test/nvme/fused_ordering/fused_ordering.o 00:04:56.718 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.718 LINK memory_ut 00:04:56.978 LINK blobcli 00:04:56.978 CC test/bdev/bdevio/bdevio.o 00:04:56.978 CXX test/cpp_headers/fd.o 00:04:56.978 LINK doorbell_aers 00:04:56.978 CC test/nvme/fdp/fdp.o 00:04:56.978 CXX test/cpp_headers/file.o 00:04:56.978 LINK fused_ordering 00:04:56.978 CC test/nvme/cuse/cuse.o 00:04:56.978 CC examples/nvme/hello_world/hello_world.o 00:04:57.239 CC examples/nvme/reconnect/reconnect.o 00:04:57.239 LINK nvme_compliance 00:04:57.239 CXX test/cpp_headers/fsdev.o 00:04:57.239 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:57.239 CC examples/nvme/arbitration/arbitration.o 00:04:57.239 CXX test/cpp_headers/fsdev_module.o 00:04:57.239 LINK hello_world 00:04:57.239 LINK bdevio 00:04:57.239 LINK fdp 00:04:57.239 CXX test/cpp_headers/ftl.o 00:04:57.239 CC examples/nvme/hotplug/hotplug.o 00:04:57.501 LINK reconnect 00:04:57.501 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:57.501 CC examples/nvme/abort/abort.o 00:04:57.501 CXX test/cpp_headers/gpt_spec.o 00:04:57.501 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:57.501 LINK hotplug 00:04:57.501 LINK arbitration 00:04:57.501 CXX test/cpp_headers/hexlify.o 00:04:57.501 LINK cmb_copy 00:04:57.762 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.762 LINK pmr_persistence 00:04:57.762 CXX test/cpp_headers/histogram_data.o 00:04:57.762 LINK nvme_manage 00:04:57.762 CXX test/cpp_headers/idxd.o 00:04:57.762 CC examples/bdev/bdevperf/bdevperf.o 00:04:57.762 CXX test/cpp_headers/idxd_spec.o 00:04:57.762 CXX test/cpp_headers/init.o 00:04:57.762 CXX test/cpp_headers/ioat.o 00:04:57.762 CXX test/cpp_headers/ioat_spec.o 00:04:57.762 LINK abort 00:04:57.762 LINK hello_bdev 00:04:57.762 CXX test/cpp_headers/iscsi_spec.o 00:04:57.762 CXX test/cpp_headers/json.o 00:04:57.762 CXX test/cpp_headers/jsonrpc.o 00:04:58.023 CXX test/cpp_headers/keyring.o 00:04:58.023 CXX test/cpp_headers/keyring_module.o 00:04:58.023 CXX test/cpp_headers/likely.o 00:04:58.023 CXX test/cpp_headers/log.o 00:04:58.023 CXX test/cpp_headers/lvol.o 00:04:58.023 CXX test/cpp_headers/md5.o 00:04:58.023 LINK cuse 00:04:58.023 CXX test/cpp_headers/memory.o 00:04:58.023 CXX test/cpp_headers/mmio.o 00:04:58.023 CXX test/cpp_headers/nbd.o 00:04:58.023 CXX test/cpp_headers/net.o 00:04:58.023 CXX test/cpp_headers/notify.o 00:04:58.023 CXX test/cpp_headers/nvme.o 00:04:58.023 CXX test/cpp_headers/nvme_intel.o 00:04:58.023 CXX test/cpp_headers/nvme_ocssd.o 00:04:58.023 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:58.023 CXX test/cpp_headers/nvme_spec.o 00:04:58.023 CXX test/cpp_headers/nvme_zns.o 00:04:58.282 CXX test/cpp_headers/nvmf_cmd.o 00:04:58.282 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:58.282 CXX test/cpp_headers/nvmf.o 00:04:58.282 CXX test/cpp_headers/nvmf_spec.o 00:04:58.282 CXX 
test/cpp_headers/nvmf_transport.o 00:04:58.282 CXX test/cpp_headers/opal.o 00:04:58.282 CXX test/cpp_headers/opal_spec.o 00:04:58.282 CXX test/cpp_headers/pci_ids.o 00:04:58.282 CXX test/cpp_headers/pipe.o 00:04:58.282 CXX test/cpp_headers/queue.o 00:04:58.282 LINK bdevperf 00:04:58.282 CXX test/cpp_headers/reduce.o 00:04:58.282 CXX test/cpp_headers/rpc.o 00:04:58.282 CXX test/cpp_headers/scheduler.o 00:04:58.282 CXX test/cpp_headers/scsi.o 00:04:58.282 CXX test/cpp_headers/scsi_spec.o 00:04:58.282 CXX test/cpp_headers/sock.o 00:04:58.282 CXX test/cpp_headers/stdinc.o 00:04:58.543 CXX test/cpp_headers/string.o 00:04:58.543 CXX test/cpp_headers/thread.o 00:04:58.543 CXX test/cpp_headers/trace.o 00:04:58.543 CXX test/cpp_headers/trace_parser.o 00:04:58.543 CXX test/cpp_headers/tree.o 00:04:58.543 CXX test/cpp_headers/ublk.o 00:04:58.543 CXX test/cpp_headers/util.o 00:04:58.543 CXX test/cpp_headers/uuid.o 00:04:58.543 CXX test/cpp_headers/version.o 00:04:58.543 CXX test/cpp_headers/vfio_user_pci.o 00:04:58.543 CXX test/cpp_headers/vfio_user_spec.o 00:04:58.543 CXX test/cpp_headers/vhost.o 00:04:58.543 CXX test/cpp_headers/vmd.o 00:04:58.543 CXX test/cpp_headers/xor.o 00:04:58.543 CC examples/nvmf/nvmf/nvmf.o 00:04:58.543 CXX test/cpp_headers/zipf.o 00:04:58.804 LINK nvmf 00:05:00.717 LINK esnap 00:05:00.978 00:05:00.978 real 1m0.358s 00:05:00.978 user 4m55.345s 00:05:00.978 sys 0m49.712s 00:05:00.978 22:01:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:00.978 ************************************ 00:05:00.978 END TEST make 00:05:00.978 ************************************ 00:05:00.978 22:01:07 make -- common/autotest_common.sh@10 -- $ set +x 00:05:00.978 22:01:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:00.979 22:01:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:00.979 22:01:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:00.979 22:01:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.979 22:01:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:00.979 22:01:07 -- pm/common@44 -- $ pid=5801 00:05:00.979 22:01:07 -- pm/common@50 -- $ kill -TERM 5801 00:05:00.979 22:01:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.979 22:01:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:00.979 22:01:07 -- pm/common@44 -- $ pid=5803 00:05:00.979 22:01:07 -- pm/common@50 -- $ kill -TERM 5803 00:05:00.979 22:01:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:00.979 22:01:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:01.240 22:01:07 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.240 22:01:07 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.240 22:01:07 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.501 22:01:07 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.501 22:01:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.501 22:01:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.501 22:01:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.501 22:01:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.501 22:01:07 -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.501 22:01:07 -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.501 22:01:07 -- scripts/common.sh@337 -- # read -ra ver2 
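[Editor's note] The xtrace records above and below walk the lt/cmp_versions helpers from scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays (read -ra ver1/ver2), then compared field by field as integers. A condensed sketch of that comparison; the function name version_lt and the single '<' case are mine, while the real helper also handles '>' and '=' but pads missing fields with zero the same way:

    version_lt() {    # returns 0 when $1 sorts before $2, field-wise
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}    # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'    # matches the trace: lt 1.15 2 succeeds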
00:05:01.501 22:01:07 -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.501 22:01:07 -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.501 22:01:07 -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.501 22:01:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.501 22:01:07 -- scripts/common.sh@344 -- # case "$op" in 00:05:01.501 22:01:07 -- scripts/common.sh@345 -- # : 1 00:05:01.501 22:01:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.501 22:01:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.501 22:01:07 -- scripts/common.sh@365 -- # decimal 1 00:05:01.501 22:01:07 -- scripts/common.sh@353 -- # local d=1 00:05:01.501 22:01:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.501 22:01:07 -- scripts/common.sh@355 -- # echo 1 00:05:01.501 22:01:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.501 22:01:07 -- scripts/common.sh@366 -- # decimal 2 00:05:01.501 22:01:07 -- scripts/common.sh@353 -- # local d=2 00:05:01.501 22:01:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.501 22:01:07 -- scripts/common.sh@355 -- # echo 2 00:05:01.501 22:01:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.501 22:01:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.501 22:01:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.501 22:01:07 -- scripts/common.sh@368 -- # return 0 00:05:01.501 22:01:07 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.501 22:01:07 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.501 --rc genhtml_branch_coverage=1 00:05:01.501 --rc genhtml_function_coverage=1 00:05:01.501 --rc genhtml_legend=1 00:05:01.501 --rc geninfo_all_blocks=1 00:05:01.501 --rc geninfo_unexecuted_blocks=1 00:05:01.501 00:05:01.501 ' 00:05:01.501 22:01:07 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.501 --rc genhtml_branch_coverage=1 00:05:01.501 --rc genhtml_function_coverage=1 00:05:01.501 --rc genhtml_legend=1 00:05:01.501 --rc geninfo_all_blocks=1 00:05:01.501 --rc geninfo_unexecuted_blocks=1 00:05:01.501 00:05:01.501 ' 00:05:01.501 22:01:07 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.501 --rc genhtml_branch_coverage=1 00:05:01.501 --rc genhtml_function_coverage=1 00:05:01.501 --rc genhtml_legend=1 00:05:01.501 --rc geninfo_all_blocks=1 00:05:01.501 --rc geninfo_unexecuted_blocks=1 00:05:01.501 00:05:01.501 ' 00:05:01.501 22:01:07 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.501 --rc genhtml_branch_coverage=1 00:05:01.501 --rc genhtml_function_coverage=1 00:05:01.501 --rc genhtml_legend=1 00:05:01.501 --rc geninfo_all_blocks=1 00:05:01.501 --rc geninfo_unexecuted_blocks=1 00:05:01.501 00:05:01.501 ' 00:05:01.501 22:01:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.501 22:01:07 -- nvmf/common.sh@7 -- # uname -s 00:05:01.501 22:01:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.501 22:01:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.501 22:01:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.501 22:01:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.501 22:01:07 -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:05:01.501 22:01:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.501 22:01:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.501 22:01:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.501 22:01:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.501 22:01:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.501 22:01:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:86349a24-162f-435c-aa93-39d31211c65f 00:05:01.501 22:01:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=86349a24-162f-435c-aa93-39d31211c65f 00:05:01.501 22:01:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.501 22:01:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.501 22:01:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.501 22:01:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.501 22:01:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.501 22:01:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.501 22:01:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.501 22:01:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.501 22:01:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.501 22:01:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.501 22:01:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.501 22:01:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.501 22:01:07 -- paths/export.sh@5 -- # export PATH 00:05:01.501 22:01:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.501 22:01:07 -- nvmf/common.sh@51 -- # : 0 00:05:01.501 22:01:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.501 22:01:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.501 22:01:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.501 22:01:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.501 22:01:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.501 22:01:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.501 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.501 22:01:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.501 22:01:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.501 22:01:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.502 22:01:07 -- spdk/autotest.sh@27 -- 
# '[' 0 -ne 0 ']' 00:05:01.502 22:01:07 -- spdk/autotest.sh@32 -- # uname -s 00:05:01.502 22:01:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:01.502 22:01:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:01.502 22:01:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.502 22:01:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:01.502 22:01:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.502 22:01:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:01.502 22:01:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:01.502 22:01:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:01.502 22:01:07 -- spdk/autotest.sh@48 -- # udevadm_pid=68325 00:05:01.502 22:01:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:01.502 22:01:07 -- pm/common@17 -- # local monitor 00:05:01.502 22:01:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.502 22:01:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:01.502 22:01:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.502 22:01:07 -- pm/common@25 -- # sleep 1 00:05:01.502 22:01:07 -- pm/common@21 -- # date +%s 00:05:01.502 22:01:07 -- pm/common@21 -- # date +%s 00:05:01.502 22:01:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734386467 00:05:01.502 22:01:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734386467 00:05:01.502 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734386467_collect-cpu-load.pm.log 00:05:01.502 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734386467_collect-vmstat.pm.log 00:05:02.486 22:01:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.486 22:01:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.486 22:01:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.486 22:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:02.486 22:01:08 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.486 22:01:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:02.486 22:01:08 -- common/autotest_common.sh@10 -- # set +x 00:05:02.486 22:01:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.486 22:01:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.486 22:01:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.486 22:01:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.486 22:01:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.486 22:01:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.486 22:01:08 -- common/autotest_common.sh@1457 -- # uname 00:05:02.486 22:01:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:02.486 22:01:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.486 22:01:08 -- common/autotest_common.sh@1477 -- # uname 00:05:02.486 22:01:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:02.486 22:01:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 
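[Editor's note] The records that follow capture the coverage baseline: autotest checks the lcov version, then runs an initial (-i) capture tagged 'Baseline' into cov_base.info before any test executes. That is the first step of the standard lcov flow; the post-run capture and merge sketched below are the usual continuation of that pattern rather than something shown verbatim at this point in the log:

    # 1. zero-coverage baseline so never-executed files still appear in the report
    lcov -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info
    # 2. ... run the test suites ...
    # 3. capture the real counters, then merge with the baseline
    lcov -q -c --no-external -t Tests -d "$src" -o cov_test.info
    lcov -a cov_base.info -a cov_test.info -o cov_total.info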
00:05:02.486 22:01:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.486 lcov: LCOV version 1.15 00:05:02.486 22:01:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:17.413 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:17.413 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:32.327 22:01:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:32.327 22:01:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.328 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:05:32.328 22:01:37 -- spdk/autotest.sh@78 -- # rm -f 00:05:32.328 22:01:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.328 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:32.328 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:32.328 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:32.328 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:32.328 22:01:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:32.328 22:01:38 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:32.328 22:01:38 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:32.328 22:01:38 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:32.328 22:01:38 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:32.328 22:01:38 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:32.328 22:01:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:32.328 22:01:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:32.328 22:01:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:32.328 22:01:38 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:32.328 22:01:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:32.328 22:01:38 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:32.328 22:01:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:32.328 22:01:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:32.328 22:01:38 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:32.328 22:01:38 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:32.328 22:01:38 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:32.328 22:01:38 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:32.328 22:01:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:32.328 22:01:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.328 22:01:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.328 22:01:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:32.328 22:01:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:32.328 22:01:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:32.328 No valid GPT data, bailing 00:05:32.328 22:01:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.328 22:01:38 -- scripts/common.sh@394 -- # pt= 00:05:32.328 22:01:38 -- scripts/common.sh@395 -- # return 1 00:05:32.328 22:01:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:32.594 1+0 records in 00:05:32.594 1+0 records out 00:05:32.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0351602 s, 29.8 MB/s 00:05:32.594 22:01:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.594 22:01:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.594 22:01:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:32.594 22:01:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:32.594 22:01:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:32.594 No valid GPT data, bailing 00:05:32.594 22:01:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:32.594 22:01:38 -- scripts/common.sh@394 -- # pt= 00:05:32.594 22:01:38 -- scripts/common.sh@395 -- # return 1 00:05:32.594 22:01:38 -- spdk/autotest.sh@101 -- # 
dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:32.594 1+0 records in 00:05:32.594 1+0 records out 00:05:32.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527538 s, 199 MB/s 00:05:32.594 22:01:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.594 22:01:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.594 22:01:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:32.594 22:01:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:32.594 22:01:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:32.594 No valid GPT data, bailing 00:05:32.594 22:01:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:32.594 22:01:38 -- scripts/common.sh@394 -- # pt= 00:05:32.594 22:01:38 -- scripts/common.sh@395 -- # return 1 00:05:32.594 22:01:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:32.594 1+0 records in 00:05:32.594 1+0 records out 00:05:32.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053211 s, 197 MB/s 00:05:32.594 22:01:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.594 22:01:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.594 22:01:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:32.594 22:01:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:32.594 22:01:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:32.594 No valid GPT data, bailing 00:05:32.594 22:01:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:32.594 22:01:38 -- scripts/common.sh@394 -- # pt= 00:05:32.594 22:01:38 -- scripts/common.sh@395 -- # return 1 00:05:32.594 22:01:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:32.594 1+0 records in 00:05:32.594 1+0 records out 00:05:32.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551407 s, 190 MB/s 00:05:32.594 22:01:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.594 22:01:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.594 22:01:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:32.594 22:01:38 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:32.594 22:01:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:32.855 No valid GPT data, bailing 00:05:32.855 22:01:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:32.855 22:01:39 -- scripts/common.sh@394 -- # pt= 00:05:32.855 22:01:39 -- scripts/common.sh@395 -- # return 1 00:05:32.855 22:01:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:32.855 1+0 records in 00:05:32.855 1+0 records out 00:05:32.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590579 s, 178 MB/s 00:05:32.855 22:01:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.855 22:01:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.855 22:01:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:32.855 22:01:39 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:32.855 22:01:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:32.855 No valid GPT data, bailing 00:05:32.855 22:01:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:32.855 22:01:39 -- scripts/common.sh@394 -- # pt= 00:05:32.855 22:01:39 -- scripts/common.sh@395 -- # return 1 00:05:32.855 22:01:39 -- spdk/autotest.sh@101 -- # 
dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:32.855 1+0 records in 00:05:32.855 1+0 records out 00:05:32.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497282 s, 211 MB/s 00:05:32.855 22:01:39 -- spdk/autotest.sh@105 -- # sync 00:05:32.855 22:01:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:32.855 22:01:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:32.855 22:01:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:34.770 22:01:40 -- spdk/autotest.sh@111 -- # uname -s 00:05:34.770 22:01:40 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:34.770 22:01:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:34.770 22:01:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:35.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.291 Hugepages 00:05:35.291 node hugesize free / total 00:05:35.291 node0 1048576kB 0 / 0 00:05:35.291 node0 2048kB 0 / 0 00:05:35.291 00:05:35.291 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.551 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:35.551 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:35.551 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:35.551 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:35.812 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:35.812 22:01:41 -- spdk/autotest.sh@117 -- # uname -s 00:05:35.812 22:01:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:35.812 22:01:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:35.812 22:01:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.644 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.644 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.644 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.644 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.644 22:01:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:37.578 22:01:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:37.578 22:01:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:37.578 22:01:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:37.578 22:01:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:37.578 22:01:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:37.578 22:01:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:37.578 22:01:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.839 22:01:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.839 22:01:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:37.839 22:01:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:37.839 22:01:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:37.839 22:01:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.099 Waiting for block devices as requested 00:05:38.099 0000:00:11.0 
(1b36 0010): uio_pci_generic -> nvme 00:05:38.359 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.359 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.359 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:43.649 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:43.649 22:01:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:43.649 22:01:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:43.649 22:01:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:43.649 22:01:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1543 -- # continue 00:05:43.649 22:01:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:43.649 
22:01:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:43.649 22:01:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1543 -- # continue 00:05:43.649 22:01:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:43.649 22:01:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1543 -- # continue 00:05:43.649 22:01:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:43.649 22:01:49 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:43.649 22:01:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:43.649 22:01:49 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:43.649 22:01:49 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:43.649 22:01:49 -- common/autotest_common.sh@1543 -- # continue 00:05:43.649 22:01:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:43.649 22:01:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.649 22:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.649 22:01:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:43.649 22:01:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.649 22:01:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.649 22:01:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.481 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.481 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.481 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.481 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.481 22:01:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:44.481 22:01:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.481 22:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.481 22:01:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:44.481 22:01:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:44.481 22:01:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.481 22:01:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:44.481 22:01:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:44.481 22:01:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:44.481 22:01:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:44.481 22:01:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:44.481 22:01:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:44.481 22:01:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:44.481 22:01:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.481 22:01:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:44.481 22:01:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.741 22:01:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:44.741 22:01:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:44.741 22:01:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:44.741 22:01:50 -- 
common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:44.741 22:01:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:44.741 22:01:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.741 22:01:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:44.741 22:01:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:44.741 22:01:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:44.741 22:01:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.741 22:01:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:44.741 22:01:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:44.741 22:01:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:44.741 22:01:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.741 22:01:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:44.741 22:01:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:44.741 22:01:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:44.741 22:01:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.741 22:01:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:44.741 22:01:50 -- common/autotest_common.sh@1572 -- # return 0 00:05:44.741 22:01:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:44.741 22:01:50 -- common/autotest_common.sh@1580 -- # return 0 00:05:44.741 22:01:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:44.742 22:01:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:44.742 22:01:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:44.742 22:01:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:44.742 22:01:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:44.742 22:01:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.742 22:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.742 22:01:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:44.742 22:01:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.742 22:01:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.742 22:01:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.742 22:01:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.742 ************************************ 00:05:44.742 START TEST env 00:05:44.742 ************************************ 00:05:44.742 22:01:50 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.742 * Looking for test storage... 
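[Editor's note] The START TEST banner above is printed by the run_test helper from autotest_common.sh, which brackets each suite ('env' here wraps test/env/env.sh) with banners and hands back the script's exit status. A condensed sketch of that bracketing; the real helper also toggles xtrace and records per-suite timing:

    run_test() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' '**********' "$name" '**********'
        "$@"                 # execute the test script with its arguments
        local rc=$?          # the assignment captures the script's exit status
        printf '%s\nEND TEST %s\n%s\n' '**********' "$name" '**********'
        return $rc
    }
    run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh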
00:05:44.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:44.742 22:01:50 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.742 22:01:50 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.742 22:01:50 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.742 22:01:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.742 22:01:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.742 22:01:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.742 22:01:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.742 22:01:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.742 22:01:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.742 22:01:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.742 22:01:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.742 22:01:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.742 22:01:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.742 22:01:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.742 22:01:51 env -- scripts/common.sh@344 -- # case "$op" in 00:05:44.742 22:01:51 env -- scripts/common.sh@345 -- # : 1 00:05:44.742 22:01:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.742 22:01:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.742 22:01:51 env -- scripts/common.sh@365 -- # decimal 1 00:05:44.742 22:01:51 env -- scripts/common.sh@353 -- # local d=1 00:05:44.742 22:01:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.742 22:01:51 env -- scripts/common.sh@355 -- # echo 1 00:05:44.742 22:01:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.742 22:01:51 env -- scripts/common.sh@366 -- # decimal 2 00:05:44.742 22:01:51 env -- scripts/common.sh@353 -- # local d=2 00:05:44.742 22:01:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.742 22:01:51 env -- scripts/common.sh@355 -- # echo 2 00:05:44.742 22:01:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.742 22:01:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.742 22:01:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.742 22:01:51 env -- scripts/common.sh@368 -- # return 0 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.742 --rc genhtml_branch_coverage=1 00:05:44.742 --rc genhtml_function_coverage=1 00:05:44.742 --rc genhtml_legend=1 00:05:44.742 --rc geninfo_all_blocks=1 00:05:44.742 --rc geninfo_unexecuted_blocks=1 00:05:44.742 00:05:44.742 ' 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.742 --rc genhtml_branch_coverage=1 00:05:44.742 --rc genhtml_function_coverage=1 00:05:44.742 --rc genhtml_legend=1 00:05:44.742 --rc geninfo_all_blocks=1 00:05:44.742 --rc geninfo_unexecuted_blocks=1 00:05:44.742 00:05:44.742 ' 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.742 --rc genhtml_branch_coverage=1 00:05:44.742 --rc genhtml_function_coverage=1 00:05:44.742 --rc 
genhtml_legend=1 00:05:44.742 --rc geninfo_all_blocks=1 00:05:44.742 --rc geninfo_unexecuted_blocks=1 00:05:44.742 00:05:44.742 ' 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.742 --rc genhtml_branch_coverage=1 00:05:44.742 --rc genhtml_function_coverage=1 00:05:44.742 --rc genhtml_legend=1 00:05:44.742 --rc geninfo_all_blocks=1 00:05:44.742 --rc geninfo_unexecuted_blocks=1 00:05:44.742 00:05:44.742 ' 00:05:44.742 22:01:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.742 22:01:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.742 22:01:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.742 ************************************ 00:05:44.742 START TEST env_memory 00:05:44.742 ************************************ 00:05:44.742 22:01:51 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.742 00:05:44.742 00:05:44.742 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.742 http://cunit.sourceforge.net/ 00:05:44.742 00:05:44.742 00:05:44.742 Suite: memory 00:05:44.742 Test: alloc and free memory map ...[2024-12-16 22:01:51.082883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:45.002 passed 00:05:45.002 Test: mem map translation ...[2024-12-16 22:01:51.121628] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:45.002 [2024-12-16 22:01:51.121746] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:45.002 [2024-12-16 22:01:51.121860] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:45.002 [2024-12-16 22:01:51.121901] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:45.002 passed 00:05:45.002 Test: mem map registration ...[2024-12-16 22:01:51.190085] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:45.002 [2024-12-16 22:01:51.190220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:45.002 passed 00:05:45.002 Test: mem map adjacent registrations ...passed 00:05:45.002 00:05:45.002 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.002 suites 1 1 n/a 0 0 00:05:45.002 tests 4 4 4 0 0 00:05:45.002 asserts 152 152 152 0 n/a 00:05:45.002 00:05:45.002 Elapsed time = 0.233 seconds 00:05:45.002 ************************************ 00:05:45.002 END TEST env_memory 00:05:45.002 ************************************ 00:05:45.002 00:05:45.002 real 0m0.268s 00:05:45.002 user 0m0.242s 00:05:45.002 sys 0m0.019s 00:05:45.002 22:01:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.002 22:01:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:45.002 22:01:51 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.002 22:01:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.002 22:01:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.002 22:01:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.002 ************************************ 00:05:45.002 START TEST env_vtophys 00:05:45.002 ************************************ 00:05:45.002 22:01:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.263 EAL: lib.eal log level changed from notice to debug 00:05:45.263 EAL: Detected lcore 0 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 1 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 2 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 3 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 4 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 5 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 6 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 7 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 8 as core 0 on socket 0 00:05:45.263 EAL: Detected lcore 9 as core 0 on socket 0 00:05:45.263 EAL: Maximum logical cores by configuration: 128 00:05:45.263 EAL: Detected CPU lcores: 10 00:05:45.263 EAL: Detected NUMA nodes: 1 00:05:45.263 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:45.263 EAL: Detected shared linkage of DPDK 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:45.263 EAL: Registered [vdev] bus. 00:05:45.263 EAL: bus.vdev log level changed from disabled to notice 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:45.263 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:45.263 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:45.263 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:45.263 EAL: No shared files mode enabled, IPC will be disabled 00:05:45.263 EAL: No shared files mode enabled, IPC is disabled 00:05:45.263 EAL: Selected IOVA mode 'PA' 00:05:45.263 EAL: Probing VFIO support... 00:05:45.263 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.263 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:45.263 EAL: Ask a virtual area of 0x2e000 bytes 00:05:45.263 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:45.263 EAL: Setting up physically contiguous memory... 
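The EAL lines above probe for VFIO and fall back when /sys/module/vfio is absent. A minimal shell check mirroring that probe (module names as probed above; the fallback comment is an assumption about this environment, where uio_pci_generic was bound during setup.sh earlier):

    for mod in vfio vfio_pci; do
        if [ -d "/sys/module/$mod" ]; then
            echo "$mod: loaded"
        else
            # matches "VFIO modules not loaded, skipping VFIO support" above
            echo "$mod: not loaded; EAL falls back (here: uio_pci_generic, IOVA mode PA)"
        fi
    done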
00:05:45.263 EAL: Setting maximum number of open files to 524288 00:05:45.263 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:45.263 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:45.263 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.263 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:45.263 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.263 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.263 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:45.263 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:45.264 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.264 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:45.264 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.264 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.264 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:45.264 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:45.264 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.264 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:45.264 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.264 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.264 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:45.264 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:45.264 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.264 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:45.264 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.264 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.264 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:45.264 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:45.264 EAL: Hugepages will be freed exactly as allocated. 00:05:45.264 EAL: No shared files mode enabled, IPC is disabled 00:05:45.264 EAL: No shared files mode enabled, IPC is disabled 00:05:45.264 EAL: TSC frequency is ~2600000 KHz 00:05:45.264 EAL: Main lcore 0 is ready (tid=7f686ea33a40;cpuset=[0]) 00:05:45.264 EAL: Trying to obtain current memory policy. 00:05:45.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.264 EAL: Restoring previous memory policy: 0 00:05:45.264 EAL: request: mp_malloc_sync 00:05:45.264 EAL: No shared files mode enabled, IPC is disabled 00:05:45.264 EAL: Heap on socket 0 was expanded by 2MB 00:05:45.264 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.264 EAL: No shared files mode enabled, IPC is disabled 00:05:45.264 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:45.264 EAL: Mem event callback 'spdk:(nil)' registered 00:05:45.264 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:45.264 00:05:45.264 00:05:45.264 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.264 http://cunit.sourceforge.net/ 00:05:45.264 00:05:45.264 00:05:45.264 Suite: components_suite 00:05:45.525 Test: vtophys_malloc_test ...passed 00:05:45.525 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
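The four memseg-list reservations above each pair a 0x61000-byte list header with a 0x400000000-byte VA window (8192 segments x 2 MiB hugepages = 16 GiB). A quick sketch of the arithmetic, with sizes taken directly from the EAL lines above:

    lists=4
    window=$((0x400000000))   # per-list VA window: 8192 segs * 2 MiB = 16 GiB
    header=$((0x61000))       # per-list memseg header
    total=$((lists * (window + header)))
    printf 'VA reserved for memseg lists: %d bytes (~%d GiB)\n' "$total" $((total >> 30))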
00:05:45.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.525 EAL: Restoring previous memory policy: 4 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was expanded by 4MB 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was shrunk by 4MB 00:05:45.525 EAL: Trying to obtain current memory policy. 00:05:45.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.525 EAL: Restoring previous memory policy: 4 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was expanded by 6MB 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was shrunk by 6MB 00:05:45.525 EAL: Trying to obtain current memory policy. 00:05:45.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.525 EAL: Restoring previous memory policy: 4 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was expanded by 10MB 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was shrunk by 10MB 00:05:45.525 EAL: Trying to obtain current memory policy. 00:05:45.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.525 EAL: Restoring previous memory policy: 4 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was expanded by 18MB 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was shrunk by 18MB 00:05:45.525 EAL: Trying to obtain current memory policy. 00:05:45.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.525 EAL: Restoring previous memory policy: 4 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was expanded by 34MB 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was shrunk by 34MB 00:05:45.525 EAL: Trying to obtain current memory policy. 
00:05:45.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.525 EAL: Restoring previous memory policy: 4 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was expanded by 66MB 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was shrunk by 66MB 00:05:45.525 EAL: Trying to obtain current memory policy. 00:05:45.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.525 EAL: Restoring previous memory policy: 4 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.525 EAL: request: mp_malloc_sync 00:05:45.525 EAL: No shared files mode enabled, IPC is disabled 00:05:45.525 EAL: Heap on socket 0 was expanded by 130MB 00:05:45.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.786 EAL: request: mp_malloc_sync 00:05:45.786 EAL: No shared files mode enabled, IPC is disabled 00:05:45.786 EAL: Heap on socket 0 was shrunk by 130MB 00:05:45.786 EAL: Trying to obtain current memory policy. 00:05:45.786 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.786 EAL: Restoring previous memory policy: 4 00:05:45.786 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.786 EAL: request: mp_malloc_sync 00:05:45.786 EAL: No shared files mode enabled, IPC is disabled 00:05:45.786 EAL: Heap on socket 0 was expanded by 258MB 00:05:45.786 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.786 EAL: request: mp_malloc_sync 00:05:45.786 EAL: No shared files mode enabled, IPC is disabled 00:05:45.786 EAL: Heap on socket 0 was shrunk by 258MB 00:05:45.786 EAL: Trying to obtain current memory policy. 00:05:45.786 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.786 EAL: Restoring previous memory policy: 4 00:05:45.786 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.786 EAL: request: mp_malloc_sync 00:05:45.786 EAL: No shared files mode enabled, IPC is disabled 00:05:45.786 EAL: Heap on socket 0 was expanded by 514MB 00:05:45.786 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.046 EAL: request: mp_malloc_sync 00:05:46.046 EAL: No shared files mode enabled, IPC is disabled 00:05:46.046 EAL: Heap on socket 0 was shrunk by 514MB 00:05:46.046 EAL: Trying to obtain current memory policy. 
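The expand/shrink sizes in this suite (4, 6, 10, 18, 34, 66, 130, 258, 514 MB above, 1026 MB below) follow a 2^n + 2 pattern; a short loop that reproduces the sequence seen in the log:

    size=4
    while [ "$size" -le 1026 ]; do
        echo "expand/shrink by ${size}MB"
        size=$(( (size - 2) * 2 + 2 ))   # 4 -> 6 -> 10 -> 18 -> ... -> 1026
    done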
00:05:46.046 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.046 EAL: Restoring previous memory policy: 4 00:05:46.046 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.046 EAL: request: mp_malloc_sync 00:05:46.046 EAL: No shared files mode enabled, IPC is disabled 00:05:46.046 EAL: Heap on socket 0 was expanded by 1026MB 00:05:46.046 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.308 passed 00:05:46.308 00:05:46.308 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.308 suites 1 1 n/a 0 0 00:05:46.308 tests 2 2 2 0 0 00:05:46.308 asserts 5274 5274 5274 0 n/a 00:05:46.308 00:05:46.308 Elapsed time = 0.924 seconds 00:05:46.308 EAL: request: mp_malloc_sync 00:05:46.308 EAL: No shared files mode enabled, IPC is disabled 00:05:46.308 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:46.308 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.308 EAL: request: mp_malloc_sync 00:05:46.308 EAL: No shared files mode enabled, IPC is disabled 00:05:46.308 EAL: Heap on socket 0 was shrunk by 2MB 00:05:46.308 EAL: No shared files mode enabled, IPC is disabled 00:05:46.308 EAL: No shared files mode enabled, IPC is disabled 00:05:46.308 EAL: No shared files mode enabled, IPC is disabled 00:05:46.308 ************************************ 00:05:46.308 END TEST env_vtophys 00:05:46.308 ************************************ 00:05:46.308 00:05:46.308 real 0m1.153s 00:05:46.308 user 0m0.459s 00:05:46.308 sys 0m0.567s 00:05:46.308 22:01:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.308 22:01:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:46.308 22:01:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:46.308 22:01:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.308 22:01:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.308 22:01:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.308 ************************************ 00:05:46.308 START TEST env_pci 00:05:46.308 ************************************ 00:05:46.308 22:01:52 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:46.308 00:05:46.308 00:05:46.308 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.308 http://cunit.sourceforge.net/ 00:05:46.308 00:05:46.308 00:05:46.308 Suite: pci 00:05:46.308 Test: pci_hook ...[2024-12-16 22:01:52.555326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 71039 has claimed it 00:05:46.308 passed 00:05:46.308 00:05:46.308 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.308 suites 1 1 n/a 0 0 00:05:46.308 tests 1 1 1 0 0 00:05:46.308 asserts 25 25 25 0 n/a 00:05:46.308 00:05:46.308 Elapsed time = 0.003 seconds 00:05:46.308 EAL: Cannot find device (10000:00:01.0) 00:05:46.308 EAL: Failed to attach device on primary process 00:05:46.308 ************************************ 00:05:46.308 END TEST env_pci 00:05:46.308 ************************************ 00:05:46.308 00:05:46.308 real 0m0.053s 00:05:46.308 user 0m0.019s 00:05:46.308 sys 0m0.033s 00:05:46.308 22:01:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.308 22:01:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:46.308 22:01:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:46.308 22:01:52 env -- env/env.sh@15 -- # uname 00:05:46.308 22:01:52 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:46.308 22:01:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:46.308 22:01:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:46.308 22:01:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:46.308 22:01:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.308 22:01:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.308 ************************************ 00:05:46.308 START TEST env_dpdk_post_init 00:05:46.308 ************************************ 00:05:46.308 22:01:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:46.567 EAL: Detected CPU lcores: 10 00:05:46.567 EAL: Detected NUMA nodes: 1 00:05:46.567 EAL: Detected shared linkage of DPDK 00:05:46.567 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.567 EAL: Selected IOVA mode 'PA' 00:05:46.567 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.567 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:46.567 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:46.567 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:46.567 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:46.567 Starting DPDK initialization... 00:05:46.567 Starting SPDK post initialization... 00:05:46.567 SPDK NVMe probe 00:05:46.568 Attaching to 0000:00:10.0 00:05:46.568 Attaching to 0000:00:11.0 00:05:46.568 Attaching to 0000:00:12.0 00:05:46.568 Attaching to 0000:00:13.0 00:05:46.568 Attached to 0000:00:10.0 00:05:46.568 Attached to 0000:00:11.0 00:05:46.568 Attached to 0000:00:13.0 00:05:46.568 Attached to 0000:00:12.0 00:05:46.568 Cleaning up... 
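The four controllers attached above are the same ones enumerated during pre-cleanup via gen_nvme.sh; a sketch of that enumeration step, with paths as used in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits an SPDK config JSON; jq pulls each controller's BDF
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    # prints one BDF per line: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0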
00:05:46.568 00:05:46.568 real 0m0.221s 00:05:46.568 user 0m0.069s 00:05:46.568 sys 0m0.054s 00:05:46.568 22:01:52 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.568 22:01:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.568 ************************************ 00:05:46.568 END TEST env_dpdk_post_init 00:05:46.568 ************************************ 00:05:46.568 22:01:52 env -- env/env.sh@26 -- # uname 00:05:46.568 22:01:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:46.568 22:01:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.568 22:01:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.568 22:01:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.568 22:01:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.568 ************************************ 00:05:46.568 START TEST env_mem_callbacks 00:05:46.568 ************************************ 00:05:46.568 22:01:52 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.568 EAL: Detected CPU lcores: 10 00:05:46.568 EAL: Detected NUMA nodes: 1 00:05:46.568 EAL: Detected shared linkage of DPDK 00:05:46.826 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.826 EAL: Selected IOVA mode 'PA' 00:05:46.826 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.826 00:05:46.826 00:05:46.826 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.826 http://cunit.sourceforge.net/ 00:05:46.826 00:05:46.826 00:05:46.826 Suite: memory 00:05:46.826 Test: test ... 00:05:46.826 register 0x200000200000 2097152 00:05:46.826 malloc 3145728 00:05:46.826 register 0x200000400000 4194304 00:05:46.826 buf 0x200000500000 len 3145728 PASSED 00:05:46.826 malloc 64 00:05:46.826 buf 0x2000004fff40 len 64 PASSED 00:05:46.826 malloc 4194304 00:05:46.826 register 0x200000800000 6291456 00:05:46.826 buf 0x200000a00000 len 4194304 PASSED 00:05:46.826 free 0x200000500000 3145728 00:05:46.826 free 0x2000004fff40 64 00:05:46.826 unregister 0x200000400000 4194304 PASSED 00:05:46.826 free 0x200000a00000 4194304 00:05:46.826 unregister 0x200000800000 6291456 PASSED 00:05:46.826 malloc 8388608 00:05:46.826 register 0x200000400000 10485760 00:05:46.826 buf 0x200000600000 len 8388608 PASSED 00:05:46.826 free 0x200000600000 8388608 00:05:46.826 unregister 0x200000400000 10485760 PASSED 00:05:46.826 passed 00:05:46.826 00:05:46.826 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.826 suites 1 1 n/a 0 0 00:05:46.826 tests 1 1 1 0 0 00:05:46.826 asserts 15 15 15 0 n/a 00:05:46.826 00:05:46.826 Elapsed time = 0.010 seconds 00:05:46.826 00:05:46.826 real 0m0.169s 00:05:46.826 user 0m0.025s 00:05:46.826 sys 0m0.041s 00:05:46.826 22:01:53 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.826 22:01:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.826 ************************************ 00:05:46.826 END TEST env_mem_callbacks 00:05:46.826 ************************************ 00:05:46.826 00:05:46.826 real 0m2.212s 00:05:46.826 user 0m0.966s 00:05:46.826 sys 0m0.917s 00:05:46.826 22:01:53 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.826 22:01:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.826 ************************************ 00:05:46.826 END TEST env 00:05:46.826 
************************************ 00:05:46.826 22:01:53 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:46.826 22:01:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.826 22:01:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.826 22:01:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.826 ************************************ 00:05:46.826 START TEST rpc 00:05:46.826 ************************************ 00:05:46.826 22:01:53 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.085 * Looking for test storage... 00:05:47.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.085 22:01:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.085 22:01:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.085 22:01:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.085 22:01:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.085 22:01:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.085 22:01:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.085 22:01:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.085 22:01:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.085 22:01:53 rpc -- scripts/common.sh@345 -- # : 1 00:05:47.085 22:01:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.085 22:01:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.085 22:01:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.085 22:01:53 rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.085 22:01:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.085 22:01:53 rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.085 22:01:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.085 22:01:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.085 22:01:53 rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.085 22:01:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.085 22:01:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.085 22:01:53 rpc -- scripts/common.sh@368 -- # return 0 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.085 --rc genhtml_branch_coverage=1 00:05:47.085 --rc genhtml_function_coverage=1 00:05:47.085 --rc genhtml_legend=1 00:05:47.085 --rc geninfo_all_blocks=1 00:05:47.085 --rc geninfo_unexecuted_blocks=1 00:05:47.085 00:05:47.085 ' 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.085 --rc genhtml_branch_coverage=1 00:05:47.085 --rc genhtml_function_coverage=1 00:05:47.085 --rc genhtml_legend=1 00:05:47.085 --rc geninfo_all_blocks=1 00:05:47.085 --rc geninfo_unexecuted_blocks=1 00:05:47.085 00:05:47.085 ' 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.085 --rc genhtml_branch_coverage=1 00:05:47.085 --rc genhtml_function_coverage=1 00:05:47.085 --rc genhtml_legend=1 00:05:47.085 --rc geninfo_all_blocks=1 00:05:47.085 --rc geninfo_unexecuted_blocks=1 00:05:47.085 00:05:47.085 ' 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.085 --rc genhtml_branch_coverage=1 00:05:47.085 --rc genhtml_function_coverage=1 00:05:47.085 --rc genhtml_legend=1 00:05:47.085 --rc geninfo_all_blocks=1 00:05:47.085 --rc geninfo_unexecuted_blocks=1 00:05:47.085 00:05:47.085 ' 00:05:47.085 22:01:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71166 00:05:47.085 22:01:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.085 22:01:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71166 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@835 -- # '[' -z 71166 ']' 00:05:47.085 22:01:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
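rpc.sh starts spdk_tgt with '-e bdev' and then blocks in waitforlisten until the RPC socket is up. A minimal stand-in for that wait, assuming the binary and socket paths shown in the log (the poll interval and retry count here are illustrative, not what waitforlisten actually uses):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break   # socket exists: target is listening
        sleep 0.1
    done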
00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.085 22:01:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.085 [2024-12-16 22:01:53.341005] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:47.085 [2024-12-16 22:01:53.341276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71166 ] 00:05:47.343 [2024-12-16 22:01:53.494655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.343 [2024-12-16 22:01:53.512319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:47.343 [2024-12-16 22:01:53.512361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71166' to capture a snapshot of events at runtime. 00:05:47.343 [2024-12-16 22:01:53.512376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:47.343 [2024-12-16 22:01:53.512388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:47.343 [2024-12-16 22:01:53.512397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71166 for offline analysis/debug. 00:05:47.343 [2024-12-16 22:01:53.512691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.910 22:01:54 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.910 22:01:54 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.910 22:01:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.910 22:01:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.910 22:01:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.910 22:01:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.910 22:01:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.910 22:01:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.910 22:01:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 ************************************ 00:05:47.910 START TEST rpc_integrity 00:05:47.910 ************************************ 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.910 22:01:54 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.910 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.910 { 00:05:47.910 "name": "Malloc0", 00:05:47.910 "aliases": [ 00:05:47.910 "52d0e1ef-a99c-445c-86e6-fc2e133c049d" 00:05:47.910 ], 00:05:47.910 "product_name": "Malloc disk", 00:05:47.910 "block_size": 512, 00:05:47.910 "num_blocks": 16384, 00:05:47.910 "uuid": "52d0e1ef-a99c-445c-86e6-fc2e133c049d", 00:05:47.910 "assigned_rate_limits": { 00:05:47.910 "rw_ios_per_sec": 0, 00:05:47.910 "rw_mbytes_per_sec": 0, 00:05:47.910 "r_mbytes_per_sec": 0, 00:05:47.910 "w_mbytes_per_sec": 0 00:05:47.910 }, 00:05:47.910 "claimed": false, 00:05:47.910 "zoned": false, 00:05:47.910 "supported_io_types": { 00:05:47.910 "read": true, 00:05:47.910 "write": true, 00:05:47.910 "unmap": true, 00:05:47.910 "flush": true, 00:05:47.910 "reset": true, 00:05:47.910 "nvme_admin": false, 00:05:47.910 "nvme_io": false, 00:05:47.910 "nvme_io_md": false, 00:05:47.910 "write_zeroes": true, 00:05:47.910 "zcopy": true, 00:05:47.910 "get_zone_info": false, 00:05:47.910 "zone_management": false, 00:05:47.910 "zone_append": false, 00:05:47.910 "compare": false, 00:05:47.910 "compare_and_write": false, 00:05:47.910 "abort": true, 00:05:47.910 "seek_hole": false, 00:05:47.910 "seek_data": false, 00:05:47.910 "copy": true, 00:05:47.910 "nvme_iov_md": false 00:05:47.910 }, 00:05:47.910 "memory_domains": [ 00:05:47.910 { 00:05:47.910 "dma_device_id": "system", 00:05:47.910 "dma_device_type": 1 00:05:47.910 }, 00:05:47.910 { 00:05:47.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.910 "dma_device_type": 2 00:05:47.910 } 00:05:47.910 ], 00:05:47.910 "driver_specific": {} 00:05:47.910 } 00:05:47.910 ]' 00:05:47.910 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.169 [2024-12-16 22:01:54.275283] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:48.169 [2024-12-16 22:01:54.275335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.169 [2024-12-16 22:01:54.275361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:48.169 [2024-12-16 22:01:54.275370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.169 [2024-12-16 22:01:54.277579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.169 [2024-12-16 22:01:54.277708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.169 Passthru0 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.169 
22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.169 { 00:05:48.169 "name": "Malloc0", 00:05:48.169 "aliases": [ 00:05:48.169 "52d0e1ef-a99c-445c-86e6-fc2e133c049d" 00:05:48.169 ], 00:05:48.169 "product_name": "Malloc disk", 00:05:48.169 "block_size": 512, 00:05:48.169 "num_blocks": 16384, 00:05:48.169 "uuid": "52d0e1ef-a99c-445c-86e6-fc2e133c049d", 00:05:48.169 "assigned_rate_limits": { 00:05:48.169 "rw_ios_per_sec": 0, 00:05:48.169 "rw_mbytes_per_sec": 0, 00:05:48.169 "r_mbytes_per_sec": 0, 00:05:48.169 "w_mbytes_per_sec": 0 00:05:48.169 }, 00:05:48.169 "claimed": true, 00:05:48.169 "claim_type": "exclusive_write", 00:05:48.169 "zoned": false, 00:05:48.169 "supported_io_types": { 00:05:48.169 "read": true, 00:05:48.169 "write": true, 00:05:48.169 "unmap": true, 00:05:48.169 "flush": true, 00:05:48.169 "reset": true, 00:05:48.169 "nvme_admin": false, 00:05:48.169 "nvme_io": false, 00:05:48.169 "nvme_io_md": false, 00:05:48.169 "write_zeroes": true, 00:05:48.169 "zcopy": true, 00:05:48.169 "get_zone_info": false, 00:05:48.169 "zone_management": false, 00:05:48.169 "zone_append": false, 00:05:48.169 "compare": false, 00:05:48.169 "compare_and_write": false, 00:05:48.169 "abort": true, 00:05:48.169 "seek_hole": false, 00:05:48.169 "seek_data": false, 00:05:48.169 "copy": true, 00:05:48.169 "nvme_iov_md": false 00:05:48.169 }, 00:05:48.169 "memory_domains": [ 00:05:48.169 { 00:05:48.169 "dma_device_id": "system", 00:05:48.169 "dma_device_type": 1 00:05:48.169 }, 00:05:48.169 { 00:05:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.169 "dma_device_type": 2 00:05:48.169 } 00:05:48.169 ], 00:05:48.169 "driver_specific": {} 00:05:48.169 }, 00:05:48.169 { 00:05:48.169 "name": "Passthru0", 00:05:48.169 "aliases": [ 00:05:48.169 "8ba5d055-84d7-5d79-8f36-7fcb815470fc" 00:05:48.169 ], 00:05:48.169 "product_name": "passthru", 00:05:48.169 "block_size": 512, 00:05:48.169 "num_blocks": 16384, 00:05:48.169 "uuid": "8ba5d055-84d7-5d79-8f36-7fcb815470fc", 00:05:48.169 "assigned_rate_limits": { 00:05:48.169 "rw_ios_per_sec": 0, 00:05:48.169 "rw_mbytes_per_sec": 0, 00:05:48.169 "r_mbytes_per_sec": 0, 00:05:48.169 "w_mbytes_per_sec": 0 00:05:48.169 }, 00:05:48.169 "claimed": false, 00:05:48.169 "zoned": false, 00:05:48.169 "supported_io_types": { 00:05:48.169 "read": true, 00:05:48.169 "write": true, 00:05:48.169 "unmap": true, 00:05:48.169 "flush": true, 00:05:48.169 "reset": true, 00:05:48.169 "nvme_admin": false, 00:05:48.169 "nvme_io": false, 00:05:48.169 "nvme_io_md": false, 00:05:48.169 "write_zeroes": true, 00:05:48.169 "zcopy": true, 00:05:48.169 "get_zone_info": false, 00:05:48.169 "zone_management": false, 00:05:48.169 "zone_append": false, 00:05:48.169 "compare": false, 00:05:48.169 "compare_and_write": false, 00:05:48.169 "abort": true, 00:05:48.169 "seek_hole": false, 00:05:48.169 "seek_data": false, 00:05:48.169 "copy": true, 00:05:48.169 "nvme_iov_md": false 00:05:48.169 }, 00:05:48.169 "memory_domains": [ 00:05:48.169 { 00:05:48.169 "dma_device_id": "system", 00:05:48.169 "dma_device_type": 1 00:05:48.169 }, 00:05:48.169 { 00:05:48.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.169 "dma_device_type": 2 
00:05:48.169 } 00:05:48.169 ], 00:05:48.169 "driver_specific": { 00:05:48.169 "passthru": { 00:05:48.169 "name": "Passthru0", 00:05:48.169 "base_bdev_name": "Malloc0" 00:05:48.169 } 00:05:48.169 } 00:05:48.169 } 00:05:48.169 ]' 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.169 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.169 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.170 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.170 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.170 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.170 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.170 ************************************ 00:05:48.170 END TEST rpc_integrity 00:05:48.170 ************************************ 00:05:48.170 22:01:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.170 00:05:48.170 real 0m0.226s 00:05:48.170 user 0m0.122s 00:05:48.170 sys 0m0.043s 00:05:48.170 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.170 22:01:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 22:01:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:48.170 22:01:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.170 22:01:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.170 22:01:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 ************************************ 00:05:48.170 START TEST rpc_plugins 00:05:48.170 ************************************ 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:48.170 { 00:05:48.170 "name": "Malloc1", 00:05:48.170 "aliases": 
[ 00:05:48.170 "c88b5f2a-23cf-47cc-a284-417cd205936a" 00:05:48.170 ], 00:05:48.170 "product_name": "Malloc disk", 00:05:48.170 "block_size": 4096, 00:05:48.170 "num_blocks": 256, 00:05:48.170 "uuid": "c88b5f2a-23cf-47cc-a284-417cd205936a", 00:05:48.170 "assigned_rate_limits": { 00:05:48.170 "rw_ios_per_sec": 0, 00:05:48.170 "rw_mbytes_per_sec": 0, 00:05:48.170 "r_mbytes_per_sec": 0, 00:05:48.170 "w_mbytes_per_sec": 0 00:05:48.170 }, 00:05:48.170 "claimed": false, 00:05:48.170 "zoned": false, 00:05:48.170 "supported_io_types": { 00:05:48.170 "read": true, 00:05:48.170 "write": true, 00:05:48.170 "unmap": true, 00:05:48.170 "flush": true, 00:05:48.170 "reset": true, 00:05:48.170 "nvme_admin": false, 00:05:48.170 "nvme_io": false, 00:05:48.170 "nvme_io_md": false, 00:05:48.170 "write_zeroes": true, 00:05:48.170 "zcopy": true, 00:05:48.170 "get_zone_info": false, 00:05:48.170 "zone_management": false, 00:05:48.170 "zone_append": false, 00:05:48.170 "compare": false, 00:05:48.170 "compare_and_write": false, 00:05:48.170 "abort": true, 00:05:48.170 "seek_hole": false, 00:05:48.170 "seek_data": false, 00:05:48.170 "copy": true, 00:05:48.170 "nvme_iov_md": false 00:05:48.170 }, 00:05:48.170 "memory_domains": [ 00:05:48.170 { 00:05:48.170 "dma_device_id": "system", 00:05:48.170 "dma_device_type": 1 00:05:48.170 }, 00:05:48.170 { 00:05:48.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.170 "dma_device_type": 2 00:05:48.170 } 00:05:48.170 ], 00:05:48.170 "driver_specific": {} 00:05:48.170 } 00:05:48.170 ]' 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.170 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.429 ************************************ 00:05:48.429 END TEST rpc_plugins 00:05:48.429 ************************************ 00:05:48.429 22:01:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.429 00:05:48.429 real 0m0.114s 00:05:48.429 user 0m0.066s 00:05:48.429 sys 0m0.014s 00:05:48.429 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.429 22:01:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.429 22:01:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.429 22:01:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.429 22:01:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.429 22:01:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.429 ************************************ 00:05:48.429 START TEST rpc_trace_cmd_test 00:05:48.429 ************************************ 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:48.429 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71166", 00:05:48.429 "tpoint_group_mask": "0x8", 00:05:48.429 "iscsi_conn": { 00:05:48.429 "mask": "0x2", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "scsi": { 00:05:48.429 "mask": "0x4", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "bdev": { 00:05:48.429 "mask": "0x8", 00:05:48.429 "tpoint_mask": "0xffffffffffffffff" 00:05:48.429 }, 00:05:48.429 "nvmf_rdma": { 00:05:48.429 "mask": "0x10", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "nvmf_tcp": { 00:05:48.429 "mask": "0x20", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "ftl": { 00:05:48.429 "mask": "0x40", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "blobfs": { 00:05:48.429 "mask": "0x80", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "dsa": { 00:05:48.429 "mask": "0x200", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "thread": { 00:05:48.429 "mask": "0x400", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "nvme_pcie": { 00:05:48.429 "mask": "0x800", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "iaa": { 00:05:48.429 "mask": "0x1000", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "nvme_tcp": { 00:05:48.429 "mask": "0x2000", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "bdev_nvme": { 00:05:48.429 "mask": "0x4000", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "sock": { 00:05:48.429 "mask": "0x8000", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "blob": { 00:05:48.429 "mask": "0x10000", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "bdev_raid": { 00:05:48.429 "mask": "0x20000", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 }, 00:05:48.429 "scheduler": { 00:05:48.429 "mask": "0x40000", 00:05:48.429 "tpoint_mask": "0x0" 00:05:48.429 } 00:05:48.429 }' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.429 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.688 ************************************ 00:05:48.688 END TEST rpc_trace_cmd_test 00:05:48.688 ************************************ 00:05:48.688 22:01:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:48.688 00:05:48.688 real 0m0.172s 
00:05:48.688 user 0m0.142s 00:05:48.688 sys 0m0.021s 00:05:48.688 22:01:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.688 22:01:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.688 22:01:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:48.688 22:01:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.688 22:01:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.688 22:01:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.688 22:01:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.688 22:01:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.688 ************************************ 00:05:48.688 START TEST rpc_daemon_integrity 00:05:48.688 ************************************ 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.688 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.688 { 00:05:48.688 "name": "Malloc2", 00:05:48.688 "aliases": [ 00:05:48.688 "0816ae5c-3a48-4f3c-9b0b-345adc34f475" 00:05:48.688 ], 00:05:48.688 "product_name": "Malloc disk", 00:05:48.688 "block_size": 512, 00:05:48.688 "num_blocks": 16384, 00:05:48.688 "uuid": "0816ae5c-3a48-4f3c-9b0b-345adc34f475", 00:05:48.688 "assigned_rate_limits": { 00:05:48.688 "rw_ios_per_sec": 0, 00:05:48.688 "rw_mbytes_per_sec": 0, 00:05:48.688 "r_mbytes_per_sec": 0, 00:05:48.688 "w_mbytes_per_sec": 0 00:05:48.688 }, 00:05:48.688 "claimed": false, 00:05:48.688 "zoned": false, 00:05:48.688 "supported_io_types": { 00:05:48.688 "read": true, 00:05:48.688 "write": true, 00:05:48.688 "unmap": true, 00:05:48.688 "flush": true, 00:05:48.688 "reset": true, 00:05:48.689 "nvme_admin": false, 00:05:48.689 "nvme_io": false, 00:05:48.689 "nvme_io_md": false, 00:05:48.689 "write_zeroes": true, 00:05:48.689 "zcopy": true, 00:05:48.689 "get_zone_info": false, 00:05:48.689 "zone_management": false, 00:05:48.689 "zone_append": false, 00:05:48.689 "compare": false, 00:05:48.689 
"compare_and_write": false, 00:05:48.689 "abort": true, 00:05:48.689 "seek_hole": false, 00:05:48.689 "seek_data": false, 00:05:48.689 "copy": true, 00:05:48.689 "nvme_iov_md": false 00:05:48.689 }, 00:05:48.689 "memory_domains": [ 00:05:48.689 { 00:05:48.689 "dma_device_id": "system", 00:05:48.689 "dma_device_type": 1 00:05:48.689 }, 00:05:48.689 { 00:05:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.689 "dma_device_type": 2 00:05:48.689 } 00:05:48.689 ], 00:05:48.689 "driver_specific": {} 00:05:48.689 } 00:05:48.689 ]' 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.689 [2024-12-16 22:01:54.947570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:48.689 [2024-12-16 22:01:54.947618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.689 [2024-12-16 22:01:54.947646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:48.689 [2024-12-16 22:01:54.947654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.689 [2024-12-16 22:01:54.949822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.689 [2024-12-16 22:01:54.949863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.689 Passthru0 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.689 { 00:05:48.689 "name": "Malloc2", 00:05:48.689 "aliases": [ 00:05:48.689 "0816ae5c-3a48-4f3c-9b0b-345adc34f475" 00:05:48.689 ], 00:05:48.689 "product_name": "Malloc disk", 00:05:48.689 "block_size": 512, 00:05:48.689 "num_blocks": 16384, 00:05:48.689 "uuid": "0816ae5c-3a48-4f3c-9b0b-345adc34f475", 00:05:48.689 "assigned_rate_limits": { 00:05:48.689 "rw_ios_per_sec": 0, 00:05:48.689 "rw_mbytes_per_sec": 0, 00:05:48.689 "r_mbytes_per_sec": 0, 00:05:48.689 "w_mbytes_per_sec": 0 00:05:48.689 }, 00:05:48.689 "claimed": true, 00:05:48.689 "claim_type": "exclusive_write", 00:05:48.689 "zoned": false, 00:05:48.689 "supported_io_types": { 00:05:48.689 "read": true, 00:05:48.689 "write": true, 00:05:48.689 "unmap": true, 00:05:48.689 "flush": true, 00:05:48.689 "reset": true, 00:05:48.689 "nvme_admin": false, 00:05:48.689 "nvme_io": false, 00:05:48.689 "nvme_io_md": false, 00:05:48.689 "write_zeroes": true, 00:05:48.689 "zcopy": true, 00:05:48.689 "get_zone_info": false, 00:05:48.689 "zone_management": false, 00:05:48.689 "zone_append": false, 00:05:48.689 "compare": false, 00:05:48.689 "compare_and_write": false, 00:05:48.689 "abort": true, 00:05:48.689 "seek_hole": false, 00:05:48.689 "seek_data": false, 
00:05:48.689 "copy": true, 00:05:48.689 "nvme_iov_md": false 00:05:48.689 }, 00:05:48.689 "memory_domains": [ 00:05:48.689 { 00:05:48.689 "dma_device_id": "system", 00:05:48.689 "dma_device_type": 1 00:05:48.689 }, 00:05:48.689 { 00:05:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.689 "dma_device_type": 2 00:05:48.689 } 00:05:48.689 ], 00:05:48.689 "driver_specific": {} 00:05:48.689 }, 00:05:48.689 { 00:05:48.689 "name": "Passthru0", 00:05:48.689 "aliases": [ 00:05:48.689 "b485d6a2-ce07-5945-ab59-23f20a9f3cc9" 00:05:48.689 ], 00:05:48.689 "product_name": "passthru", 00:05:48.689 "block_size": 512, 00:05:48.689 "num_blocks": 16384, 00:05:48.689 "uuid": "b485d6a2-ce07-5945-ab59-23f20a9f3cc9", 00:05:48.689 "assigned_rate_limits": { 00:05:48.689 "rw_ios_per_sec": 0, 00:05:48.689 "rw_mbytes_per_sec": 0, 00:05:48.689 "r_mbytes_per_sec": 0, 00:05:48.689 "w_mbytes_per_sec": 0 00:05:48.689 }, 00:05:48.689 "claimed": false, 00:05:48.689 "zoned": false, 00:05:48.689 "supported_io_types": { 00:05:48.689 "read": true, 00:05:48.689 "write": true, 00:05:48.689 "unmap": true, 00:05:48.689 "flush": true, 00:05:48.689 "reset": true, 00:05:48.689 "nvme_admin": false, 00:05:48.689 "nvme_io": false, 00:05:48.689 "nvme_io_md": false, 00:05:48.689 "write_zeroes": true, 00:05:48.689 "zcopy": true, 00:05:48.689 "get_zone_info": false, 00:05:48.689 "zone_management": false, 00:05:48.689 "zone_append": false, 00:05:48.689 "compare": false, 00:05:48.689 "compare_and_write": false, 00:05:48.689 "abort": true, 00:05:48.689 "seek_hole": false, 00:05:48.689 "seek_data": false, 00:05:48.689 "copy": true, 00:05:48.689 "nvme_iov_md": false 00:05:48.689 }, 00:05:48.689 "memory_domains": [ 00:05:48.689 { 00:05:48.689 "dma_device_id": "system", 00:05:48.689 "dma_device_type": 1 00:05:48.689 }, 00:05:48.689 { 00:05:48.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.689 "dma_device_type": 2 00:05:48.689 } 00:05:48.689 ], 00:05:48.689 "driver_specific": { 00:05:48.689 "passthru": { 00:05:48.689 "name": "Passthru0", 00:05:48.689 "base_bdev_name": "Malloc2" 00:05:48.689 } 00:05:48.689 } 00:05:48.689 } 00:05:48.689 ]' 00:05:48.689 22:01:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.689 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.953 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.953 22:01:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:48.953 22:01:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.953 ************************************ 00:05:48.953 END TEST rpc_daemon_integrity 00:05:48.953 ************************************ 00:05:48.953 22:01:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.953 00:05:48.953 real 0m0.232s 00:05:48.953 user 0m0.128s 00:05:48.953 sys 0m0.035s 00:05:48.953 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.953 22:01:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.953 22:01:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.953 22:01:55 rpc -- rpc/rpc.sh@84 -- # killprocess 71166 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@954 -- # '[' -z 71166 ']' 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@958 -- # kill -0 71166 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71166 00:05:48.953 killing process with pid 71166 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71166' 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@973 -- # kill 71166 00:05:48.953 22:01:55 rpc -- common/autotest_common.sh@978 -- # wait 71166 00:05:49.211 00:05:49.211 real 0m2.264s 00:05:49.211 user 0m2.737s 00:05:49.211 sys 0m0.539s 00:05:49.211 ************************************ 00:05:49.211 END TEST rpc 00:05:49.211 ************************************ 00:05:49.211 22:01:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.211 22:01:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.211 22:01:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.211 22:01:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.211 22:01:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.211 22:01:55 -- common/autotest_common.sh@10 -- # set +x 00:05:49.211 ************************************ 00:05:49.211 START TEST skip_rpc 00:05:49.211 ************************************ 00:05:49.211 22:01:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:49.211 * Looking for test storage... 
00:05:49.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.211 22:01:55 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:49.211 22:01:55 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:49.211 22:01:55 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.470 22:01:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.470 --rc genhtml_branch_coverage=1 00:05:49.470 --rc genhtml_function_coverage=1 00:05:49.470 --rc genhtml_legend=1 00:05:49.470 --rc geninfo_all_blocks=1 00:05:49.470 --rc geninfo_unexecuted_blocks=1 00:05:49.470 00:05:49.470 ' 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.470 --rc genhtml_branch_coverage=1 00:05:49.470 --rc genhtml_function_coverage=1 00:05:49.470 --rc genhtml_legend=1 00:05:49.470 --rc geninfo_all_blocks=1 00:05:49.470 --rc geninfo_unexecuted_blocks=1 00:05:49.470 00:05:49.470 ' 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.470 --rc genhtml_branch_coverage=1 00:05:49.470 --rc genhtml_function_coverage=1 00:05:49.470 --rc genhtml_legend=1 00:05:49.470 --rc geninfo_all_blocks=1 00:05:49.470 --rc geninfo_unexecuted_blocks=1 00:05:49.470 00:05:49.470 ' 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.470 --rc genhtml_branch_coverage=1 00:05:49.470 --rc genhtml_function_coverage=1 00:05:49.470 --rc genhtml_legend=1 00:05:49.470 --rc geninfo_all_blocks=1 00:05:49.470 --rc geninfo_unexecuted_blocks=1 00:05:49.470 00:05:49.470 ' 00:05:49.470 22:01:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.470 22:01:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:49.470 22:01:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.470 22:01:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.470 ************************************ 00:05:49.470 START TEST skip_rpc 00:05:49.470 ************************************ 00:05:49.470 22:01:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:49.470 22:01:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71362 00:05:49.470 22:01:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.470 22:01:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:49.470 22:01:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.470 [2024-12-16 22:01:55.679764] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
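Note: the scripts/common.sh records traced in this suite's prologue (lt 1.15 2 via cmp_versions) gate which lcov coverage flags get exported: version strings are split on the characters . - : and compared component-wise. A simplified sketch of that comparison, assuming purely numeric components (the real helper also normalizes each piece through its decimal() function):

  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first greater component: not less-than
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller component: less-than
      done
      return 1                                              # all components equal: not less-than
  }
  lt 1.15 2 && echo "1.15 < 2"   # prints, matching the 'return 0' traced above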
00:05:49.470 [2024-12-16 22:01:55.679896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71362 ] 00:05:49.729 [2024-12-16 22:01:55.838459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.729 [2024-12-16 22:01:55.857062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71362 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71362 ']' 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71362 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71362 00:05:54.993 killing process with pid 71362 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.993 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.994 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71362' 00:05:54.994 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71362 00:05:54.994 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71362 00:05:54.994 00:05:54.994 real 0m5.251s 00:05:54.994 user 0m4.934s 00:05:54.994 sys 0m0.222s 00:05:54.994 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.994 22:02:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.994 ************************************ 00:05:54.994 END TEST skip_rpc 00:05:54.994 
************************************ 00:05:54.994 22:02:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:54.994 22:02:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.994 22:02:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.994 22:02:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.994 ************************************ 00:05:54.994 START TEST skip_rpc_with_json 00:05:54.994 ************************************ 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71450 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71450 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71450 ']' 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.994 22:02:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.994 [2024-12-16 22:02:00.969056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
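Note: the skip_rpc case that just closed is the control experiment of this suite: spdk_tgt was launched with --no-rpc-server, so the spdk_get_version RPC has to fail (the es=1 records above), and the test passes precisely because the call failed. A by-hand equivalent, assuming the default socket:

  $ build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  $ scripts/rpc.py spdk_get_version   # no listener at /var/tmp/spdk.sock -> non-zero exit, as asserted above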
00:05:54.994 [2024-12-16 22:02:00.969166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71450 ] 00:05:54.994 [2024-12-16 22:02:01.125679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.994 [2024-12-16 22:02:01.142220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.560 [2024-12-16 22:02:01.803946] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.560 request: 00:05:55.560 { 00:05:55.560 "trtype": "tcp", 00:05:55.560 "method": "nvmf_get_transports", 00:05:55.560 "req_id": 1 00:05:55.560 } 00:05:55.560 Got JSON-RPC error response 00:05:55.560 response: 00:05:55.560 { 00:05:55.560 "code": -19, 00:05:55.560 "message": "No such device" 00:05:55.560 } 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.560 [2024-12-16 22:02:01.816043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.560 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.819 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.819 22:02:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.819 { 00:05:55.819 "subsystems": [ 00:05:55.819 { 00:05:55.819 "subsystem": "fsdev", 00:05:55.819 "config": [ 00:05:55.819 { 00:05:55.819 "method": "fsdev_set_opts", 00:05:55.819 "params": { 00:05:55.819 "fsdev_io_pool_size": 65535, 00:05:55.819 "fsdev_io_cache_size": 256 00:05:55.819 } 00:05:55.819 } 00:05:55.819 ] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "keyring", 00:05:55.819 "config": [] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "iobuf", 00:05:55.819 "config": [ 00:05:55.819 { 00:05:55.819 "method": "iobuf_set_options", 00:05:55.819 "params": { 00:05:55.819 "small_pool_count": 8192, 00:05:55.819 "large_pool_count": 1024, 00:05:55.819 "small_bufsize": 8192, 00:05:55.819 "large_bufsize": 135168, 00:05:55.819 "enable_numa": false 00:05:55.819 } 00:05:55.819 } 00:05:55.819 ] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "sock", 00:05:55.819 "config": [ 00:05:55.819 { 
00:05:55.819 "method": "sock_set_default_impl", 00:05:55.819 "params": { 00:05:55.819 "impl_name": "posix" 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "sock_impl_set_options", 00:05:55.819 "params": { 00:05:55.819 "impl_name": "ssl", 00:05:55.819 "recv_buf_size": 4096, 00:05:55.819 "send_buf_size": 4096, 00:05:55.819 "enable_recv_pipe": true, 00:05:55.819 "enable_quickack": false, 00:05:55.819 "enable_placement_id": 0, 00:05:55.819 "enable_zerocopy_send_server": true, 00:05:55.819 "enable_zerocopy_send_client": false, 00:05:55.819 "zerocopy_threshold": 0, 00:05:55.819 "tls_version": 0, 00:05:55.819 "enable_ktls": false 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "sock_impl_set_options", 00:05:55.819 "params": { 00:05:55.819 "impl_name": "posix", 00:05:55.819 "recv_buf_size": 2097152, 00:05:55.819 "send_buf_size": 2097152, 00:05:55.819 "enable_recv_pipe": true, 00:05:55.819 "enable_quickack": false, 00:05:55.819 "enable_placement_id": 0, 00:05:55.819 "enable_zerocopy_send_server": true, 00:05:55.819 "enable_zerocopy_send_client": false, 00:05:55.819 "zerocopy_threshold": 0, 00:05:55.819 "tls_version": 0, 00:05:55.819 "enable_ktls": false 00:05:55.819 } 00:05:55.819 } 00:05:55.819 ] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "vmd", 00:05:55.819 "config": [] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "accel", 00:05:55.819 "config": [ 00:05:55.819 { 00:05:55.819 "method": "accel_set_options", 00:05:55.819 "params": { 00:05:55.819 "small_cache_size": 128, 00:05:55.819 "large_cache_size": 16, 00:05:55.819 "task_count": 2048, 00:05:55.819 "sequence_count": 2048, 00:05:55.819 "buf_count": 2048 00:05:55.819 } 00:05:55.819 } 00:05:55.819 ] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "bdev", 00:05:55.819 "config": [ 00:05:55.819 { 00:05:55.819 "method": "bdev_set_options", 00:05:55.819 "params": { 00:05:55.819 "bdev_io_pool_size": 65535, 00:05:55.819 "bdev_io_cache_size": 256, 00:05:55.819 "bdev_auto_examine": true, 00:05:55.819 "iobuf_small_cache_size": 128, 00:05:55.819 "iobuf_large_cache_size": 16 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "bdev_raid_set_options", 00:05:55.819 "params": { 00:05:55.819 "process_window_size_kb": 1024, 00:05:55.819 "process_max_bandwidth_mb_sec": 0 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "bdev_iscsi_set_options", 00:05:55.819 "params": { 00:05:55.819 "timeout_sec": 30 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "bdev_nvme_set_options", 00:05:55.819 "params": { 00:05:55.819 "action_on_timeout": "none", 00:05:55.819 "timeout_us": 0, 00:05:55.819 "timeout_admin_us": 0, 00:05:55.819 "keep_alive_timeout_ms": 10000, 00:05:55.819 "arbitration_burst": 0, 00:05:55.819 "low_priority_weight": 0, 00:05:55.819 "medium_priority_weight": 0, 00:05:55.819 "high_priority_weight": 0, 00:05:55.819 "nvme_adminq_poll_period_us": 10000, 00:05:55.819 "nvme_ioq_poll_period_us": 0, 00:05:55.819 "io_queue_requests": 0, 00:05:55.819 "delay_cmd_submit": true, 00:05:55.819 "transport_retry_count": 4, 00:05:55.819 "bdev_retry_count": 3, 00:05:55.819 "transport_ack_timeout": 0, 00:05:55.819 "ctrlr_loss_timeout_sec": 0, 00:05:55.819 "reconnect_delay_sec": 0, 00:05:55.819 "fast_io_fail_timeout_sec": 0, 00:05:55.819 "disable_auto_failback": false, 00:05:55.819 "generate_uuids": false, 00:05:55.819 "transport_tos": 0, 00:05:55.819 "nvme_error_stat": false, 00:05:55.819 "rdma_srq_size": 0, 00:05:55.819 "io_path_stat": false, 
00:05:55.819 "allow_accel_sequence": false, 00:05:55.819 "rdma_max_cq_size": 0, 00:05:55.819 "rdma_cm_event_timeout_ms": 0, 00:05:55.819 "dhchap_digests": [ 00:05:55.819 "sha256", 00:05:55.819 "sha384", 00:05:55.819 "sha512" 00:05:55.819 ], 00:05:55.819 "dhchap_dhgroups": [ 00:05:55.819 "null", 00:05:55.819 "ffdhe2048", 00:05:55.819 "ffdhe3072", 00:05:55.819 "ffdhe4096", 00:05:55.819 "ffdhe6144", 00:05:55.819 "ffdhe8192" 00:05:55.819 ], 00:05:55.819 "rdma_umr_per_io": false 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "bdev_nvme_set_hotplug", 00:05:55.819 "params": { 00:05:55.819 "period_us": 100000, 00:05:55.819 "enable": false 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "bdev_wait_for_examine" 00:05:55.819 } 00:05:55.819 ] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "scsi", 00:05:55.819 "config": null 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "scheduler", 00:05:55.819 "config": [ 00:05:55.819 { 00:05:55.819 "method": "framework_set_scheduler", 00:05:55.819 "params": { 00:05:55.819 "name": "static" 00:05:55.819 } 00:05:55.819 } 00:05:55.819 ] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "vhost_scsi", 00:05:55.819 "config": [] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "vhost_blk", 00:05:55.819 "config": [] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "ublk", 00:05:55.819 "config": [] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "nbd", 00:05:55.819 "config": [] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "nvmf", 00:05:55.819 "config": [ 00:05:55.819 { 00:05:55.819 "method": "nvmf_set_config", 00:05:55.819 "params": { 00:05:55.819 "discovery_filter": "match_any", 00:05:55.819 "admin_cmd_passthru": { 00:05:55.819 "identify_ctrlr": false 00:05:55.819 }, 00:05:55.819 "dhchap_digests": [ 00:05:55.819 "sha256", 00:05:55.819 "sha384", 00:05:55.819 "sha512" 00:05:55.819 ], 00:05:55.819 "dhchap_dhgroups": [ 00:05:55.819 "null", 00:05:55.819 "ffdhe2048", 00:05:55.819 "ffdhe3072", 00:05:55.819 "ffdhe4096", 00:05:55.819 "ffdhe6144", 00:05:55.819 "ffdhe8192" 00:05:55.819 ] 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "nvmf_set_max_subsystems", 00:05:55.819 "params": { 00:05:55.819 "max_subsystems": 1024 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "nvmf_set_crdt", 00:05:55.819 "params": { 00:05:55.819 "crdt1": 0, 00:05:55.819 "crdt2": 0, 00:05:55.819 "crdt3": 0 00:05:55.819 } 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "method": "nvmf_create_transport", 00:05:55.819 "params": { 00:05:55.819 "trtype": "TCP", 00:05:55.819 "max_queue_depth": 128, 00:05:55.819 "max_io_qpairs_per_ctrlr": 127, 00:05:55.819 "in_capsule_data_size": 4096, 00:05:55.819 "max_io_size": 131072, 00:05:55.819 "io_unit_size": 131072, 00:05:55.819 "max_aq_depth": 128, 00:05:55.819 "num_shared_buffers": 511, 00:05:55.819 "buf_cache_size": 4294967295, 00:05:55.819 "dif_insert_or_strip": false, 00:05:55.819 "zcopy": false, 00:05:55.819 "c2h_success": true, 00:05:55.819 "sock_priority": 0, 00:05:55.819 "abort_timeout_sec": 1, 00:05:55.819 "ack_timeout": 0, 00:05:55.819 "data_wr_pool_size": 0 00:05:55.819 } 00:05:55.819 } 00:05:55.819 ] 00:05:55.819 }, 00:05:55.819 { 00:05:55.819 "subsystem": "iscsi", 00:05:55.819 "config": [ 00:05:55.819 { 00:05:55.819 "method": "iscsi_set_options", 00:05:55.819 "params": { 00:05:55.819 "node_base": "iqn.2016-06.io.spdk", 00:05:55.819 "max_sessions": 128, 00:05:55.819 "max_connections_per_session": 2, 00:05:55.819 
"max_queue_depth": 64, 00:05:55.819 "default_time2wait": 2, 00:05:55.819 "default_time2retain": 20, 00:05:55.819 "first_burst_length": 8192, 00:05:55.819 "immediate_data": true, 00:05:55.819 "allow_duplicated_isid": false, 00:05:55.819 "error_recovery_level": 0, 00:05:55.820 "nop_timeout": 60, 00:05:55.820 "nop_in_interval": 30, 00:05:55.820 "disable_chap": false, 00:05:55.820 "require_chap": false, 00:05:55.820 "mutual_chap": false, 00:05:55.820 "chap_group": 0, 00:05:55.820 "max_large_datain_per_connection": 64, 00:05:55.820 "max_r2t_per_connection": 4, 00:05:55.820 "pdu_pool_size": 36864, 00:05:55.820 "immediate_data_pool_size": 16384, 00:05:55.820 "data_out_pool_size": 2048 00:05:55.820 } 00:05:55.820 } 00:05:55.820 ] 00:05:55.820 } 00:05:55.820 ] 00:05:55.820 } 00:05:55.820 22:02:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:55.820 22:02:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71450 00:05:55.820 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71450 ']' 00:05:55.820 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71450 00:05:55.820 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:55.820 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.820 22:02:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71450 00:05:55.820 killing process with pid 71450 00:05:55.820 22:02:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.820 22:02:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.820 22:02:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71450' 00:05:55.820 22:02:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71450 00:05:55.820 22:02:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71450 00:05:56.078 22:02:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71473 00:05:56.078 22:02:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.078 22:02:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71473 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71473 ']' 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71473 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71473 00:06:01.341 killing process with pid 71473 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71473' 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 71473 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71473 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.341 00:06:01.341 real 0m6.580s 00:06:01.341 user 0m6.322s 00:06:01.341 sys 0m0.493s 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.341 ************************************ 00:06:01.341 END TEST skip_rpc_with_json 00:06:01.341 ************************************ 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.341 22:02:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.341 22:02:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.341 22:02:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.341 22:02:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.341 ************************************ 00:06:01.341 START TEST skip_rpc_with_delay 00:06:01.341 ************************************ 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.341 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.342 [2024-12-16 22:02:07.597705] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
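Note: the error record just above is the whole point of skip_rpc_with_delay: --wait-for-rpc makes the app block until an rpc_framework_start_init call, which can never arrive when --no-rpc-server is also given, so startup must abort. The skip_rpc_with_json case that ended just before it exercised the opposite path, persisting live config with save_config and replaying it from a file. A condensed by-hand equivalent of that round trip, assuming the default socket (the test's real paths sit under test/rpc/):

  $ build/bin/spdk_tgt -m 0x1 &                          # RPC server enabled this time
  $ scripts/rpc.py nvmf_create_transport -t tcp          # mutate runtime state
  $ scripts/rpc.py save_config > config.json             # the JSON dump shown above
  $ kill %1 && wait
  $ build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  $ grep -q 'TCP Transport Init' log.txt                 # replayed config recreated the transport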
00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:01.342 ************************************ 00:06:01.342 END TEST skip_rpc_with_delay 00:06:01.342 ************************************ 00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.342 00:06:01.342 real 0m0.100s 00:06:01.342 user 0m0.052s 00:06:01.342 sys 0m0.048s 00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.342 22:02:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.342 22:02:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:01.342 22:02:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:01.342 22:02:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:01.342 22:02:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.342 22:02:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.342 22:02:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.600 ************************************ 00:06:01.600 START TEST exit_on_failed_rpc_init 00:06:01.600 ************************************ 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:01.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71584 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71584 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71584 ']' 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.600 22:02:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.600 [2024-12-16 22:02:07.769677] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:01.600 [2024-12-16 22:02:07.769963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71584 ] 00:06:01.600 [2024-12-16 22:02:07.925815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.600 [2024-12-16 22:02:07.942293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:02.533 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.533 [2024-12-16 22:02:08.675184] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:02.533 [2024-12-16 22:02:08.675694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71602 ] 00:06:02.534 [2024-12-16 22:02:08.829201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.534 [2024-12-16 22:02:08.847039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.534 [2024-12-16 22:02:08.847108] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
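Note: exit_on_failed_rpc_init drives two targets at the same default RPC socket and asserts the second one dies during init. A minimal sketch of the collision the records above capture, assuming the default socket path:

  $ build/bin/spdk_tgt -m 0x1 &    # first target binds /var/tmp/spdk.sock (pid 71584 above)
  $ build/bin/spdk_tgt -m 0x2      # second target, same socket:
  # rpc.c reports the path in use ("Specify another."), rpc_initialize fails,
  # and the app stops non-zero -- the exact outcome the NOT wrapper above expects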
00:06:02.534 [2024-12-16 22:02:08.847123] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.534 [2024-12-16 22:02:08.847140] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71584 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71584 ']' 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71584 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71584 00:06:02.792 killing process with pid 71584 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71584' 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71584 00:06:02.792 22:02:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71584 00:06:03.050 00:06:03.050 real 0m1.460s 00:06:03.050 user 0m1.606s 00:06:03.050 sys 0m0.356s 00:06:03.050 22:02:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.050 ************************************ 00:06:03.050 END TEST exit_on_failed_rpc_init 00:06:03.050 ************************************ 00:06:03.050 22:02:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.050 22:02:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.050 ************************************ 00:06:03.050 END TEST skip_rpc 00:06:03.050 ************************************ 00:06:03.050 00:06:03.050 real 0m13.753s 00:06:03.050 user 0m13.053s 00:06:03.050 sys 0m1.286s 00:06:03.050 22:02:09 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.050 22:02:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.050 22:02:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.050 22:02:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.050 22:02:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.050 22:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.050 
************************************ 00:06:03.050 START TEST rpc_client 00:06:03.050 ************************************ 00:06:03.050 22:02:09 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:03.050 * Looking for test storage... 00:06:03.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:03.050 22:02:09 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.050 22:02:09 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.050 22:02:09 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.050 22:02:09 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:03.050 22:02:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:03.051 22:02:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.051 22:02:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:03.051 22:02:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.309 22:02:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:03.309 22:02:09 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.309 22:02:09 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.309 --rc genhtml_branch_coverage=1 00:06:03.309 --rc genhtml_function_coverage=1 00:06:03.309 --rc genhtml_legend=1 00:06:03.309 --rc geninfo_all_blocks=1 00:06:03.309 --rc geninfo_unexecuted_blocks=1 00:06:03.309 00:06:03.309 ' 00:06:03.309 22:02:09 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.309 --rc genhtml_branch_coverage=1 00:06:03.309 --rc genhtml_function_coverage=1 00:06:03.309 --rc genhtml_legend=1 00:06:03.309 --rc geninfo_all_blocks=1 00:06:03.309 --rc geninfo_unexecuted_blocks=1 00:06:03.309 00:06:03.309 ' 00:06:03.309 22:02:09 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.309 --rc genhtml_branch_coverage=1 00:06:03.309 --rc genhtml_function_coverage=1 00:06:03.309 --rc genhtml_legend=1 00:06:03.309 --rc geninfo_all_blocks=1 00:06:03.309 --rc geninfo_unexecuted_blocks=1 00:06:03.309 00:06:03.309 ' 00:06:03.309 22:02:09 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.309 --rc genhtml_branch_coverage=1 00:06:03.309 --rc genhtml_function_coverage=1 00:06:03.309 --rc genhtml_legend=1 00:06:03.309 --rc geninfo_all_blocks=1 00:06:03.309 --rc geninfo_unexecuted_blocks=1 00:06:03.309 00:06:03.309 ' 00:06:03.309 22:02:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:03.309 OK 00:06:03.309 22:02:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.309 00:06:03.309 real 0m0.182s 00:06:03.309 user 0m0.098s 00:06:03.309 sys 0m0.090s 00:06:03.309 22:02:09 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.309 ************************************ 00:06:03.309 END TEST rpc_client 00:06:03.309 ************************************ 00:06:03.309 22:02:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.309 22:02:09 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.309 22:02:09 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.309 22:02:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.309 22:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.309 ************************************ 00:06:03.309 START TEST json_config 00:06:03.309 ************************************ 00:06:03.309 22:02:09 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:03.309 22:02:09 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.309 22:02:09 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.309 22:02:09 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.309 22:02:09 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.309 22:02:09 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.309 22:02:09 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.309 22:02:09 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.309 22:02:09 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.309 22:02:09 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.309 22:02:09 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.309 22:02:09 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.309 22:02:09 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.309 22:02:09 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.309 22:02:09 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.309 22:02:09 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.309 22:02:09 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:03.309 22:02:09 json_config -- scripts/common.sh@345 -- # : 1 00:06:03.309 22:02:09 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.309 22:02:09 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.310 22:02:09 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:03.310 22:02:09 json_config -- scripts/common.sh@353 -- # local d=1 00:06:03.310 22:02:09 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.310 22:02:09 json_config -- scripts/common.sh@355 -- # echo 1 00:06:03.310 22:02:09 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.310 22:02:09 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:03.310 22:02:09 json_config -- scripts/common.sh@353 -- # local d=2 00:06:03.310 22:02:09 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.310 22:02:09 json_config -- scripts/common.sh@355 -- # echo 2 00:06:03.310 22:02:09 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.310 22:02:09 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.310 22:02:09 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.310 22:02:09 json_config -- scripts/common.sh@368 -- # return 0 00:06:03.310 22:02:09 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.310 22:02:09 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.310 --rc genhtml_branch_coverage=1 00:06:03.310 --rc genhtml_function_coverage=1 00:06:03.310 --rc genhtml_legend=1 00:06:03.310 --rc geninfo_all_blocks=1 00:06:03.310 --rc geninfo_unexecuted_blocks=1 00:06:03.310 00:06:03.310 ' 00:06:03.310 22:02:09 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.310 --rc genhtml_branch_coverage=1 00:06:03.310 --rc genhtml_function_coverage=1 00:06:03.310 --rc genhtml_legend=1 00:06:03.310 --rc geninfo_all_blocks=1 00:06:03.310 --rc geninfo_unexecuted_blocks=1 00:06:03.310 00:06:03.310 ' 00:06:03.310 22:02:09 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.310 --rc genhtml_branch_coverage=1 00:06:03.310 --rc genhtml_function_coverage=1 00:06:03.310 --rc genhtml_legend=1 00:06:03.310 --rc geninfo_all_blocks=1 00:06:03.310 --rc geninfo_unexecuted_blocks=1 00:06:03.310 00:06:03.310 ' 00:06:03.310 22:02:09 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.310 --rc genhtml_branch_coverage=1 00:06:03.310 --rc genhtml_function_coverage=1 00:06:03.310 --rc genhtml_legend=1 00:06:03.310 --rc geninfo_all_blocks=1 00:06:03.310 --rc geninfo_unexecuted_blocks=1 00:06:03.310 00:06:03.310 ' 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.310 22:02:09 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:86349a24-162f-435c-aa93-39d31211c65f 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=86349a24-162f-435c-aa93-39d31211c65f 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.310 22:02:09 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.310 22:02:09 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.310 22:02:09 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.310 22:02:09 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.310 22:02:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.310 22:02:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.310 22:02:09 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.310 22:02:09 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.310 22:02:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@51 -- # : 0 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.310 22:02:09 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.310 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.310 22:02:09 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.310 WARNING: No tests are enabled so not running JSON configuration tests 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:03.310 22:02:09 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:03.310 00:06:03.310 real 0m0.136s 00:06:03.310 user 0m0.090s 00:06:03.310 sys 0m0.051s 00:06:03.310 22:02:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.310 22:02:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.310 ************************************ 00:06:03.310 END TEST json_config 00:06:03.310 ************************************ 00:06:03.571 22:02:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.571 22:02:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.571 22:02:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.571 22:02:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.571 ************************************ 00:06:03.571 START TEST json_config_extra_key 00:06:03.571 ************************************ 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.571 22:02:09 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.571 --rc genhtml_branch_coverage=1 00:06:03.571 --rc genhtml_function_coverage=1 00:06:03.571 --rc genhtml_legend=1 00:06:03.571 --rc geninfo_all_blocks=1 00:06:03.571 --rc geninfo_unexecuted_blocks=1 00:06:03.571 00:06:03.571 ' 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.571 --rc genhtml_branch_coverage=1 00:06:03.571 --rc genhtml_function_coverage=1 00:06:03.571 --rc genhtml_legend=1 00:06:03.571 --rc geninfo_all_blocks=1 00:06:03.571 --rc geninfo_unexecuted_blocks=1 00:06:03.571 00:06:03.571 ' 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.571 --rc genhtml_branch_coverage=1 00:06:03.571 --rc genhtml_function_coverage=1 00:06:03.571 --rc genhtml_legend=1 00:06:03.571 --rc geninfo_all_blocks=1 00:06:03.571 --rc geninfo_unexecuted_blocks=1 00:06:03.571 00:06:03.571 ' 00:06:03.571 22:02:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.571 --rc genhtml_branch_coverage=1 00:06:03.571 --rc 
genhtml_function_coverage=1 00:06:03.571 --rc genhtml_legend=1 00:06:03.571 --rc geninfo_all_blocks=1 00:06:03.571 --rc geninfo_unexecuted_blocks=1 00:06:03.571 00:06:03.571 ' 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:86349a24-162f-435c-aa93-39d31211c65f 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=86349a24-162f-435c-aa93-39d31211c65f 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.571 22:02:09 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.571 22:02:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.571 22:02:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.571 22:02:09 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.571 22:02:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:03.571 22:02:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.571 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.571 22:02:09 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:03.571 INFO: launching applications... 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:03.571 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.572 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
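The "[: : integer expression expected" error recorded above is a real artifact of nvmf/common.sh line 33: build_nvmf_app_args runs '[' '' -eq 1 ']', and test's -eq operator requires integers on both sides, so an option that expands to the empty string trips the error and the branch is simply skipped. A minimal reproduction with the usual guard (the variable name "flag" is illustrative, not taken from the SPDK source):

    #!/usr/bin/env bash
    flag=''   # empty, like the unset option that reaches nvmf/common.sh line 33

    # Buggy form: -eq needs integers on both sides, so an empty string
    # prints "[: : integer expression expected" and the test simply fails.
    [ "$flag" -eq 1 ] && echo 'flag is set'

    # Guarded form: default empty or unset values to 0 before the numeric test.
    if [ "${flag:-0}" -eq 1 ]; then echo 'flag is set'; else echo 'flag is unset'; fi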
00:06:03.572 22:02:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71779 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.572 Waiting for target to run... 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71779 /var/tmp/spdk_tgt.sock 00:06:03.572 22:02:09 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71779 ']' 00:06:03.572 22:02:09 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.572 22:02:09 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.572 22:02:09 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.572 22:02:09 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.572 22:02:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.572 22:02:09 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:03.572 [2024-12-16 22:02:09.902817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:03.572 [2024-12-16 22:02:09.902961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71779 ] 00:06:04.142 [2024-12-16 22:02:10.231716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.142 [2024-12-16 22:02:10.245546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.401 22:02:10 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.401 00:06:04.401 22:02:10 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:04.401 INFO: shutting down applications... 00:06:04.401 22:02:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
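waitforlisten 71779, traced above, is what gates the test between launching spdk_tgt with '-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json' and the first RPC. A simplified stand-in with the same shape as the traced helper; the real autotest_common.sh version also confirms RPC readiness through rpc.py, which this sketch reduces to a socket-existence check:

    # Poll until $pid is alive and its RPC socket exists (simplified).
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
        local max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            [ -S "$rpc_addr" ] && return 0            # socket is up; the real helper also probes RPC
            sleep 0.5
        done
        return 1
    }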
00:06:04.401 22:02:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71779 ]] 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71779 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71779 00:06:04.401 22:02:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.970 22:02:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.970 22:02:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.970 22:02:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71779 00:06:04.970 22:02:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.970 22:02:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:04.970 SPDK target shutdown done 00:06:04.970 22:02:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.970 22:02:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.970 Success 00:06:04.970 22:02:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:04.970 00:06:04.970 real 0m1.569s 00:06:04.970 user 0m1.328s 00:06:04.970 sys 0m0.383s 00:06:04.970 22:02:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.970 ************************************ 00:06:04.970 END TEST json_config_extra_key 00:06:04.970 ************************************ 00:06:04.970 22:02:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.970 22:02:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.970 22:02:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.970 22:02:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.970 22:02:11 -- common/autotest_common.sh@10 -- # set +x 00:06:05.229 ************************************ 00:06:05.229 START TEST alias_rpc 00:06:05.229 ************************************ 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:05.229 * Looking for test storage... 
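json_config_test_shutdown_app, traced just above, is a SIGINT-then-poll loop: up to 30 iterations half a second apart until kill -0 stops succeeding. The same control flow as a standalone function; the failure branch after 30 tries is an assumption, since this run left the loop early via break:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do    # 30 * 0.5s = 15s grace period
            if ! kill -0 "$pid" 2> /dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "process $pid did not exit in time" >&2
        return 1
    }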
00:06:05.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.229 22:02:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:05.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.229 --rc genhtml_branch_coverage=1 00:06:05.229 --rc genhtml_function_coverage=1 00:06:05.229 --rc genhtml_legend=1 00:06:05.229 --rc geninfo_all_blocks=1 00:06:05.229 --rc geninfo_unexecuted_blocks=1 00:06:05.229 00:06:05.229 ' 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:05.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.229 --rc genhtml_branch_coverage=1 00:06:05.229 --rc genhtml_function_coverage=1 00:06:05.229 --rc genhtml_legend=1 00:06:05.229 --rc geninfo_all_blocks=1 00:06:05.229 --rc geninfo_unexecuted_blocks=1 00:06:05.229 00:06:05.229 ' 00:06:05.229 22:02:11 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:05.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.229 --rc genhtml_branch_coverage=1 00:06:05.229 --rc genhtml_function_coverage=1 00:06:05.229 --rc genhtml_legend=1 00:06:05.229 --rc geninfo_all_blocks=1 00:06:05.229 --rc geninfo_unexecuted_blocks=1 00:06:05.229 00:06:05.229 ' 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:05.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.229 --rc genhtml_branch_coverage=1 00:06:05.229 --rc genhtml_function_coverage=1 00:06:05.229 --rc genhtml_legend=1 00:06:05.229 --rc geninfo_all_blocks=1 00:06:05.229 --rc geninfo_unexecuted_blocks=1 00:06:05.229 00:06:05.229 ' 00:06:05.229 22:02:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.229 22:02:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71853 00:06:05.229 22:02:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71853 00:06:05.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71853 ']' 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.229 22:02:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.229 22:02:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.229 [2024-12-16 22:02:11.548647] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
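Each TEST block above opens with the same gate: 'lt 1.15 2' feeds the installed lcov version into cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field, so the legacy LCOV_OPTS are only exported for lcov 1.x. A condensed sketch of that comparison (numeric fields assumed; the real scripts/common.sh handles more operators and edge cases):

    # version_lt A B: succeed if version A is strictly older than B.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do   # missing fields compare as 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                            # equal versions are not "<"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x: legacy coverage flags needed'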
00:06:05.229 [2024-12-16 22:02:11.549223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71853 ] 00:06:05.488 [2024-12-16 22:02:11.712112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.488 [2024-12-16 22:02:11.732112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.058 22:02:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.058 22:02:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.058 22:02:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:06.318 22:02:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71853 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71853 ']' 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71853 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71853 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71853' 00:06:06.318 killing process with pid 71853 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 71853 00:06:06.318 22:02:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 71853 00:06:06.577 00:06:06.577 real 0m1.585s 00:06:06.577 user 0m1.722s 00:06:06.577 sys 0m0.387s 00:06:06.577 ************************************ 00:06:06.577 END TEST alias_rpc 00:06:06.577 22:02:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.577 22:02:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.577 ************************************ 00:06:06.836 22:02:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:06.836 22:02:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.836 22:02:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.836 22:02:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.836 22:02:12 -- common/autotest_common.sh@10 -- # set +x 00:06:06.836 ************************************ 00:06:06.836 START TEST spdkcli_tcp 00:06:06.836 ************************************ 00:06:06.836 22:02:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.836 * Looking for test storage... 
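The teardown traced above, killprocess 71853, checks that the pid is non-empty and still alive, looks up the command name with 'ps --no-headers -o comm=' so it never signals a sudo wrapper by mistake, then kills and waits to reap the exit status. A condensed sketch; the branch the real helper takes for sudo-owned pids is elided:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1    # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1                  # the real helper handles this case specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                   # reaping works because the target is our child
    }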
00:06:06.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.836 22:02:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.836 --rc genhtml_branch_coverage=1 00:06:06.836 --rc genhtml_function_coverage=1 00:06:06.836 --rc genhtml_legend=1 00:06:06.836 --rc geninfo_all_blocks=1 00:06:06.836 --rc geninfo_unexecuted_blocks=1 00:06:06.836 00:06:06.836 ' 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.836 --rc genhtml_branch_coverage=1 00:06:06.836 --rc genhtml_function_coverage=1 00:06:06.836 --rc genhtml_legend=1 00:06:06.836 --rc geninfo_all_blocks=1 00:06:06.836 --rc geninfo_unexecuted_blocks=1 00:06:06.836 
00:06:06.836 ' 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.836 --rc genhtml_branch_coverage=1 00:06:06.836 --rc genhtml_function_coverage=1 00:06:06.836 --rc genhtml_legend=1 00:06:06.836 --rc geninfo_all_blocks=1 00:06:06.836 --rc geninfo_unexecuted_blocks=1 00:06:06.836 00:06:06.836 ' 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.836 --rc genhtml_branch_coverage=1 00:06:06.836 --rc genhtml_function_coverage=1 00:06:06.836 --rc genhtml_legend=1 00:06:06.836 --rc geninfo_all_blocks=1 00:06:06.836 --rc geninfo_unexecuted_blocks=1 00:06:06.836 00:06:06.836 ' 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71937 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71937 00:06:06.836 22:02:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71937 ']' 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.836 22:02:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.836 [2024-12-16 22:02:13.178640] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
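spdkcli_tcp exercises the RPC server over TCP rather than over the UNIX socket directly: as the trace below shows, socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP endpoint with retries to ride out the startup race. The same bridge reproduced standalone:

    IP_ADDRESS=127.0.0.1
    PORT=9998

    # Forward TCP connections on 127.0.0.1:9998 to the target's UNIX RPC socket.
    socat TCP-LISTEN:"$PORT" UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the method list over TCP; -r 100 retries, -t 2 seconds per attempt.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods

    kill "$socat_pid" 2> /dev/null || true   # socat exits after one connection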
00:06:06.836 [2024-12-16 22:02:13.178757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71937 ] 00:06:07.097 [2024-12-16 22:02:13.336724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.097 [2024-12-16 22:02:13.366812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.097 [2024-12-16 22:02:13.366909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.038 22:02:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.038 22:02:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:08.038 22:02:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:08.038 22:02:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71949 00:06:08.038 22:02:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:08.038 [ 00:06:08.038 "bdev_malloc_delete", 00:06:08.038 "bdev_malloc_create", 00:06:08.038 "bdev_null_resize", 00:06:08.038 "bdev_null_delete", 00:06:08.038 "bdev_null_create", 00:06:08.038 "bdev_nvme_cuse_unregister", 00:06:08.038 "bdev_nvme_cuse_register", 00:06:08.038 "bdev_opal_new_user", 00:06:08.038 "bdev_opal_set_lock_state", 00:06:08.038 "bdev_opal_delete", 00:06:08.038 "bdev_opal_get_info", 00:06:08.038 "bdev_opal_create", 00:06:08.038 "bdev_nvme_opal_revert", 00:06:08.038 "bdev_nvme_opal_init", 00:06:08.038 "bdev_nvme_send_cmd", 00:06:08.038 "bdev_nvme_set_keys", 00:06:08.038 "bdev_nvme_get_path_iostat", 00:06:08.038 "bdev_nvme_get_mdns_discovery_info", 00:06:08.038 "bdev_nvme_stop_mdns_discovery", 00:06:08.038 "bdev_nvme_start_mdns_discovery", 00:06:08.038 "bdev_nvme_set_multipath_policy", 00:06:08.038 "bdev_nvme_set_preferred_path", 00:06:08.038 "bdev_nvme_get_io_paths", 00:06:08.038 "bdev_nvme_remove_error_injection", 00:06:08.038 "bdev_nvme_add_error_injection", 00:06:08.038 "bdev_nvme_get_discovery_info", 00:06:08.038 "bdev_nvme_stop_discovery", 00:06:08.038 "bdev_nvme_start_discovery", 00:06:08.038 "bdev_nvme_get_controller_health_info", 00:06:08.038 "bdev_nvme_disable_controller", 00:06:08.038 "bdev_nvme_enable_controller", 00:06:08.038 "bdev_nvme_reset_controller", 00:06:08.038 "bdev_nvme_get_transport_statistics", 00:06:08.038 "bdev_nvme_apply_firmware", 00:06:08.038 "bdev_nvme_detach_controller", 00:06:08.038 "bdev_nvme_get_controllers", 00:06:08.038 "bdev_nvme_attach_controller", 00:06:08.038 "bdev_nvme_set_hotplug", 00:06:08.038 "bdev_nvme_set_options", 00:06:08.038 "bdev_passthru_delete", 00:06:08.038 "bdev_passthru_create", 00:06:08.038 "bdev_lvol_set_parent_bdev", 00:06:08.039 "bdev_lvol_set_parent", 00:06:08.039 "bdev_lvol_check_shallow_copy", 00:06:08.039 "bdev_lvol_start_shallow_copy", 00:06:08.039 "bdev_lvol_grow_lvstore", 00:06:08.039 "bdev_lvol_get_lvols", 00:06:08.039 "bdev_lvol_get_lvstores", 00:06:08.039 "bdev_lvol_delete", 00:06:08.039 "bdev_lvol_set_read_only", 00:06:08.039 "bdev_lvol_resize", 00:06:08.039 "bdev_lvol_decouple_parent", 00:06:08.039 "bdev_lvol_inflate", 00:06:08.039 "bdev_lvol_rename", 00:06:08.039 "bdev_lvol_clone_bdev", 00:06:08.039 "bdev_lvol_clone", 00:06:08.039 "bdev_lvol_snapshot", 00:06:08.039 "bdev_lvol_create", 00:06:08.039 "bdev_lvol_delete_lvstore", 00:06:08.039 "bdev_lvol_rename_lvstore", 00:06:08.039 
"bdev_lvol_create_lvstore", 00:06:08.039 "bdev_raid_set_options", 00:06:08.039 "bdev_raid_remove_base_bdev", 00:06:08.039 "bdev_raid_add_base_bdev", 00:06:08.039 "bdev_raid_delete", 00:06:08.039 "bdev_raid_create", 00:06:08.039 "bdev_raid_get_bdevs", 00:06:08.039 "bdev_error_inject_error", 00:06:08.039 "bdev_error_delete", 00:06:08.039 "bdev_error_create", 00:06:08.039 "bdev_split_delete", 00:06:08.039 "bdev_split_create", 00:06:08.039 "bdev_delay_delete", 00:06:08.039 "bdev_delay_create", 00:06:08.039 "bdev_delay_update_latency", 00:06:08.039 "bdev_zone_block_delete", 00:06:08.039 "bdev_zone_block_create", 00:06:08.039 "blobfs_create", 00:06:08.039 "blobfs_detect", 00:06:08.039 "blobfs_set_cache_size", 00:06:08.039 "bdev_xnvme_delete", 00:06:08.039 "bdev_xnvme_create", 00:06:08.039 "bdev_aio_delete", 00:06:08.039 "bdev_aio_rescan", 00:06:08.039 "bdev_aio_create", 00:06:08.039 "bdev_ftl_set_property", 00:06:08.039 "bdev_ftl_get_properties", 00:06:08.039 "bdev_ftl_get_stats", 00:06:08.039 "bdev_ftl_unmap", 00:06:08.039 "bdev_ftl_unload", 00:06:08.039 "bdev_ftl_delete", 00:06:08.039 "bdev_ftl_load", 00:06:08.039 "bdev_ftl_create", 00:06:08.039 "bdev_virtio_attach_controller", 00:06:08.039 "bdev_virtio_scsi_get_devices", 00:06:08.039 "bdev_virtio_detach_controller", 00:06:08.039 "bdev_virtio_blk_set_hotplug", 00:06:08.039 "bdev_iscsi_delete", 00:06:08.039 "bdev_iscsi_create", 00:06:08.039 "bdev_iscsi_set_options", 00:06:08.039 "accel_error_inject_error", 00:06:08.039 "ioat_scan_accel_module", 00:06:08.039 "dsa_scan_accel_module", 00:06:08.039 "iaa_scan_accel_module", 00:06:08.039 "keyring_file_remove_key", 00:06:08.039 "keyring_file_add_key", 00:06:08.039 "keyring_linux_set_options", 00:06:08.039 "fsdev_aio_delete", 00:06:08.039 "fsdev_aio_create", 00:06:08.039 "iscsi_get_histogram", 00:06:08.039 "iscsi_enable_histogram", 00:06:08.039 "iscsi_set_options", 00:06:08.039 "iscsi_get_auth_groups", 00:06:08.039 "iscsi_auth_group_remove_secret", 00:06:08.039 "iscsi_auth_group_add_secret", 00:06:08.039 "iscsi_delete_auth_group", 00:06:08.039 "iscsi_create_auth_group", 00:06:08.039 "iscsi_set_discovery_auth", 00:06:08.039 "iscsi_get_options", 00:06:08.039 "iscsi_target_node_request_logout", 00:06:08.039 "iscsi_target_node_set_redirect", 00:06:08.039 "iscsi_target_node_set_auth", 00:06:08.039 "iscsi_target_node_add_lun", 00:06:08.039 "iscsi_get_stats", 00:06:08.039 "iscsi_get_connections", 00:06:08.039 "iscsi_portal_group_set_auth", 00:06:08.039 "iscsi_start_portal_group", 00:06:08.039 "iscsi_delete_portal_group", 00:06:08.039 "iscsi_create_portal_group", 00:06:08.039 "iscsi_get_portal_groups", 00:06:08.039 "iscsi_delete_target_node", 00:06:08.039 "iscsi_target_node_remove_pg_ig_maps", 00:06:08.039 "iscsi_target_node_add_pg_ig_maps", 00:06:08.039 "iscsi_create_target_node", 00:06:08.039 "iscsi_get_target_nodes", 00:06:08.039 "iscsi_delete_initiator_group", 00:06:08.039 "iscsi_initiator_group_remove_initiators", 00:06:08.039 "iscsi_initiator_group_add_initiators", 00:06:08.039 "iscsi_create_initiator_group", 00:06:08.039 "iscsi_get_initiator_groups", 00:06:08.039 "nvmf_set_crdt", 00:06:08.039 "nvmf_set_config", 00:06:08.039 "nvmf_set_max_subsystems", 00:06:08.039 "nvmf_stop_mdns_prr", 00:06:08.039 "nvmf_publish_mdns_prr", 00:06:08.039 "nvmf_subsystem_get_listeners", 00:06:08.039 "nvmf_subsystem_get_qpairs", 00:06:08.039 "nvmf_subsystem_get_controllers", 00:06:08.039 "nvmf_get_stats", 00:06:08.039 "nvmf_get_transports", 00:06:08.039 "nvmf_create_transport", 00:06:08.039 "nvmf_get_targets", 00:06:08.039 
"nvmf_delete_target", 00:06:08.039 "nvmf_create_target", 00:06:08.039 "nvmf_subsystem_allow_any_host", 00:06:08.039 "nvmf_subsystem_set_keys", 00:06:08.039 "nvmf_subsystem_remove_host", 00:06:08.039 "nvmf_subsystem_add_host", 00:06:08.039 "nvmf_ns_remove_host", 00:06:08.039 "nvmf_ns_add_host", 00:06:08.039 "nvmf_subsystem_remove_ns", 00:06:08.039 "nvmf_subsystem_set_ns_ana_group", 00:06:08.039 "nvmf_subsystem_add_ns", 00:06:08.039 "nvmf_subsystem_listener_set_ana_state", 00:06:08.039 "nvmf_discovery_get_referrals", 00:06:08.039 "nvmf_discovery_remove_referral", 00:06:08.039 "nvmf_discovery_add_referral", 00:06:08.039 "nvmf_subsystem_remove_listener", 00:06:08.039 "nvmf_subsystem_add_listener", 00:06:08.039 "nvmf_delete_subsystem", 00:06:08.039 "nvmf_create_subsystem", 00:06:08.039 "nvmf_get_subsystems", 00:06:08.039 "env_dpdk_get_mem_stats", 00:06:08.039 "nbd_get_disks", 00:06:08.039 "nbd_stop_disk", 00:06:08.039 "nbd_start_disk", 00:06:08.039 "ublk_recover_disk", 00:06:08.039 "ublk_get_disks", 00:06:08.039 "ublk_stop_disk", 00:06:08.039 "ublk_start_disk", 00:06:08.039 "ublk_destroy_target", 00:06:08.039 "ublk_create_target", 00:06:08.039 "virtio_blk_create_transport", 00:06:08.039 "virtio_blk_get_transports", 00:06:08.039 "vhost_controller_set_coalescing", 00:06:08.039 "vhost_get_controllers", 00:06:08.039 "vhost_delete_controller", 00:06:08.039 "vhost_create_blk_controller", 00:06:08.039 "vhost_scsi_controller_remove_target", 00:06:08.039 "vhost_scsi_controller_add_target", 00:06:08.039 "vhost_start_scsi_controller", 00:06:08.039 "vhost_create_scsi_controller", 00:06:08.039 "thread_set_cpumask", 00:06:08.039 "scheduler_set_options", 00:06:08.039 "framework_get_governor", 00:06:08.039 "framework_get_scheduler", 00:06:08.039 "framework_set_scheduler", 00:06:08.039 "framework_get_reactors", 00:06:08.039 "thread_get_io_channels", 00:06:08.039 "thread_get_pollers", 00:06:08.039 "thread_get_stats", 00:06:08.039 "framework_monitor_context_switch", 00:06:08.039 "spdk_kill_instance", 00:06:08.039 "log_enable_timestamps", 00:06:08.039 "log_get_flags", 00:06:08.039 "log_clear_flag", 00:06:08.039 "log_set_flag", 00:06:08.039 "log_get_level", 00:06:08.039 "log_set_level", 00:06:08.039 "log_get_print_level", 00:06:08.039 "log_set_print_level", 00:06:08.039 "framework_enable_cpumask_locks", 00:06:08.039 "framework_disable_cpumask_locks", 00:06:08.039 "framework_wait_init", 00:06:08.039 "framework_start_init", 00:06:08.039 "scsi_get_devices", 00:06:08.039 "bdev_get_histogram", 00:06:08.039 "bdev_enable_histogram", 00:06:08.039 "bdev_set_qos_limit", 00:06:08.039 "bdev_set_qd_sampling_period", 00:06:08.039 "bdev_get_bdevs", 00:06:08.039 "bdev_reset_iostat", 00:06:08.039 "bdev_get_iostat", 00:06:08.039 "bdev_examine", 00:06:08.039 "bdev_wait_for_examine", 00:06:08.039 "bdev_set_options", 00:06:08.039 "accel_get_stats", 00:06:08.039 "accel_set_options", 00:06:08.039 "accel_set_driver", 00:06:08.039 "accel_crypto_key_destroy", 00:06:08.039 "accel_crypto_keys_get", 00:06:08.039 "accel_crypto_key_create", 00:06:08.039 "accel_assign_opc", 00:06:08.039 "accel_get_module_info", 00:06:08.039 "accel_get_opc_assignments", 00:06:08.039 "vmd_rescan", 00:06:08.039 "vmd_remove_device", 00:06:08.039 "vmd_enable", 00:06:08.039 "sock_get_default_impl", 00:06:08.039 "sock_set_default_impl", 00:06:08.039 "sock_impl_set_options", 00:06:08.039 "sock_impl_get_options", 00:06:08.039 "iobuf_get_stats", 00:06:08.039 "iobuf_set_options", 00:06:08.039 "keyring_get_keys", 00:06:08.039 "framework_get_pci_devices", 00:06:08.039 
"framework_get_config", 00:06:08.039 "framework_get_subsystems", 00:06:08.039 "fsdev_set_opts", 00:06:08.039 "fsdev_get_opts", 00:06:08.039 "trace_get_info", 00:06:08.039 "trace_get_tpoint_group_mask", 00:06:08.039 "trace_disable_tpoint_group", 00:06:08.039 "trace_enable_tpoint_group", 00:06:08.039 "trace_clear_tpoint_mask", 00:06:08.039 "trace_set_tpoint_mask", 00:06:08.039 "notify_get_notifications", 00:06:08.039 "notify_get_types", 00:06:08.039 "spdk_get_version", 00:06:08.039 "rpc_get_methods" 00:06:08.039 ] 00:06:08.040 22:02:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.040 22:02:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:08.040 22:02:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71937 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71937 ']' 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71937 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71937 00:06:08.040 killing process with pid 71937 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71937' 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71937 00:06:08.040 22:02:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71937 00:06:08.301 00:06:08.301 real 0m1.628s 00:06:08.301 user 0m2.874s 00:06:08.301 sys 0m0.465s 00:06:08.301 ************************************ 00:06:08.301 END TEST spdkcli_tcp 00:06:08.301 ************************************ 00:06:08.301 22:02:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.301 22:02:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.301 22:02:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.301 22:02:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.301 22:02:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.301 22:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.562 ************************************ 00:06:08.562 START TEST dpdk_mem_utility 00:06:08.562 ************************************ 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.562 * Looking for test storage... 
00:06:08.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.562 22:02:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.562 --rc genhtml_branch_coverage=1 00:06:08.562 --rc genhtml_function_coverage=1 00:06:08.562 --rc genhtml_legend=1 00:06:08.562 --rc geninfo_all_blocks=1 00:06:08.562 --rc geninfo_unexecuted_blocks=1 00:06:08.562 00:06:08.562 ' 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.562 --rc 
genhtml_branch_coverage=1 00:06:08.562 --rc genhtml_function_coverage=1 00:06:08.562 --rc genhtml_legend=1 00:06:08.562 --rc geninfo_all_blocks=1 00:06:08.562 --rc geninfo_unexecuted_blocks=1 00:06:08.562 00:06:08.562 ' 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.562 --rc genhtml_branch_coverage=1 00:06:08.562 --rc genhtml_function_coverage=1 00:06:08.562 --rc genhtml_legend=1 00:06:08.562 --rc geninfo_all_blocks=1 00:06:08.562 --rc geninfo_unexecuted_blocks=1 00:06:08.562 00:06:08.562 ' 00:06:08.562 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.563 --rc genhtml_branch_coverage=1 00:06:08.563 --rc genhtml_function_coverage=1 00:06:08.563 --rc genhtml_legend=1 00:06:08.563 --rc geninfo_all_blocks=1 00:06:08.563 --rc geninfo_unexecuted_blocks=1 00:06:08.563 00:06:08.563 ' 00:06:08.563 22:02:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.563 22:02:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72032 00:06:08.563 22:02:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72032 00:06:08.563 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 72032 ']' 00:06:08.563 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.563 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.563 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.563 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.563 22:02:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.563 22:02:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.563 [2024-12-16 22:02:14.868358] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
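Once the dpdk_mem_utility target is up (waitforlisten 72032 above), the test drives it through rpc_cmd, which, reduced to its essence, forwards its arguments to rpc.py on the current socket; the trace below confirms the reply. A minimal stand-in, noting that the real autotest_common.sh helper adds a persistent RPC daemon fast path not shown here:

    rpc_cmd() {
        # Forward to rpc.py against the default target socket.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"
    }

    rpc_cmd env_dpdk_get_mem_stats   # replies: { "filename": "/tmp/spdk_mem_dump.txt" }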
00:06:08.563 [2024-12-16 22:02:14.868493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72032 ] 00:06:08.824 [2024-12-16 22:02:15.022546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.824 [2024-12-16 22:02:15.040447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.401 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.401 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:09.401 22:02:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.401 22:02:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.401 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.401 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.401 { 00:06:09.401 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.401 } 00:06:09.401 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.401 22:02:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.665 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:09.665 1 heaps totaling size 818.000000 MiB 00:06:09.665 size: 818.000000 MiB heap id: 0 00:06:09.665 end heaps---------- 00:06:09.665 9 mempools totaling size 603.782043 MiB 00:06:09.665 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.665 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.665 size: 100.555481 MiB name: bdev_io_72032 00:06:09.665 size: 50.003479 MiB name: msgpool_72032 00:06:09.665 size: 36.509338 MiB name: fsdev_io_72032 00:06:09.665 size: 21.763794 MiB name: PDU_Pool 00:06:09.665 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.665 size: 4.133484 MiB name: evtpool_72032 00:06:09.665 size: 0.026123 MiB name: Session_Pool 00:06:09.665 end mempools------- 00:06:09.665 6 memzones totaling size 4.142822 MiB 00:06:09.665 size: 1.000366 MiB name: RG_ring_0_72032 00:06:09.665 size: 1.000366 MiB name: RG_ring_1_72032 00:06:09.665 size: 1.000366 MiB name: RG_ring_4_72032 00:06:09.665 size: 1.000366 MiB name: RG_ring_5_72032 00:06:09.665 size: 0.125366 MiB name: RG_ring_2_72032 00:06:09.665 size: 0.015991 MiB name: RG_ring_3_72032 00:06:09.665 end memzones------- 00:06:09.665 22:02:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.665 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:06:09.665 list of free elements. 
size: 10.803223 MiB 00:06:09.665 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:09.665 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:09.665 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:09.665 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:09.665 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:09.665 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:09.665 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:09.665 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:09.665 element at address: 0x20001ae00000 with size: 0.568420 MiB 00:06:09.665 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:09.665 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:09.665 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:09.665 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:09.665 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:09.665 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:09.665 list of standard malloc elements. size: 199.267883 MiB 00:06:09.665 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:09.665 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:09.665 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:09.665 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:09.665 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:09.665 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:09.665 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:09.665 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:09.665 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:09.665 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:09.665 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:09.665 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:09.665 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:09.665 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:09.666 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae92f80 with size: 0.000183 MiB 
00:06:09.666 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:09.666 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:09.667 element at 
address: 0x200028265500 with size: 0.000183 MiB 00:06:09.667 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:09.667 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:09.667 list of memzone associated elements. 
size: 607.928894 MiB 00:06:09.667 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:09.667 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:09.667 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:09.667 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:09.667 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:09.667 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_72032_0 00:06:09.667 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:09.667 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72032_0 00:06:09.667 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:09.667 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_72032_0 00:06:09.667 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:09.667 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:09.667 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:09.667 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:09.667 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:09.667 associated memzone info: size: 3.000122 MiB name: MP_evtpool_72032_0 00:06:09.667 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:09.667 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72032 00:06:09.667 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:09.667 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72032 00:06:09.667 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:09.667 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:09.667 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:09.667 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:09.667 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:09.667 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:09.667 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:09.667 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:09.667 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:09.667 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72032 00:06:09.667 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:09.667 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72032 00:06:09.667 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:09.667 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72032 00:06:09.667 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:09.667 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72032 00:06:09.667 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:09.667 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_72032 00:06:09.667 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:09.667 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72032 00:06:09.667 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:09.667 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:09.667 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:09.667 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:09.667 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:09.667 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:09.667 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:09.667 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_72032 00:06:09.667 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:09.667 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72032 00:06:09.667 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:09.667 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:09.667 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:09.667 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:09.667 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:09.667 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72032 00:06:09.667 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:09.667 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:09.667 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:09.667 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72032 00:06:09.667 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:09.667 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_72032 00:06:09.667 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:09.667 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72032 00:06:09.667 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:09.667 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:09.667 22:02:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:09.667 22:02:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72032 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 72032 ']' 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 72032 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72032 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.668 killing process with pid 72032 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72032' 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 72032 00:06:09.668 22:02:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 72032 00:06:09.929 00:06:09.929 real 0m1.431s 00:06:09.929 user 0m1.500s 00:06:09.929 sys 0m0.348s 00:06:09.929 22:02:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.929 ************************************ 00:06:09.929 END TEST dpdk_mem_utility 00:06:09.929 ************************************ 00:06:09.929 22:02:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.929 22:02:16 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.929 22:02:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.929 22:02:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.929 22:02:16 -- common/autotest_common.sh@10 -- # set +x 
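Before the log moves on to the event suite, it is worth restating what test_dpdk_mem_info.sh just did, since the individual steps are easy to lose in the memory dump above. A by-hand sketch of the same sequence, using the paths, flags, and RPC name exactly as they appear in the trace (rpc.py is SPDK's generic RPC client):

  # start the target that owns the DPDK heap
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &

  # once /var/tmp/spdk.sock is up, ask the target to write out its memory
  # state; in the log this RPC answered {"filename": "/tmp/spdk_mem_dump.txt"}
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # summarize heaps and mempools, then list heap 0's elements and memzones,
  # producing the "1 heaps totaling size 818.000000 MiB" style report above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0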
00:06:09.929 ************************************ 00:06:09.929 START TEST event 00:06:09.929 ************************************ 00:06:09.929 22:02:16 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.929 * Looking for test storage... 00:06:09.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.929 22:02:16 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:09.929 22:02:16 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:09.929 22:02:16 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:09.929 22:02:16 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:09.929 22:02:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.929 22:02:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.929 22:02:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.929 22:02:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.929 22:02:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.929 22:02:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.929 22:02:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.929 22:02:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.929 22:02:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.929 22:02:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.929 22:02:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.929 22:02:16 event -- scripts/common.sh@344 -- # case "$op" in 00:06:09.929 22:02:16 event -- scripts/common.sh@345 -- # : 1 00:06:09.929 22:02:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.929 22:02:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.929 22:02:16 event -- scripts/common.sh@365 -- # decimal 1 00:06:09.929 22:02:16 event -- scripts/common.sh@353 -- # local d=1 00:06:09.929 22:02:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.929 22:02:16 event -- scripts/common.sh@355 -- # echo 1 00:06:09.929 22:02:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.929 22:02:16 event -- scripts/common.sh@366 -- # decimal 2 00:06:09.929 22:02:16 event -- scripts/common.sh@353 -- # local d=2 00:06:09.929 22:02:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.929 22:02:16 event -- scripts/common.sh@355 -- # echo 2 00:06:09.929 22:02:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.929 22:02:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.929 22:02:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.929 22:02:16 event -- scripts/common.sh@368 -- # return 0 00:06:09.929 22:02:16 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.929 22:02:16 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:09.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.930 --rc genhtml_branch_coverage=1 00:06:09.930 --rc genhtml_function_coverage=1 00:06:09.930 --rc genhtml_legend=1 00:06:09.930 --rc geninfo_all_blocks=1 00:06:09.930 --rc geninfo_unexecuted_blocks=1 00:06:09.930 00:06:09.930 ' 00:06:09.930 22:02:16 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:09.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.930 --rc genhtml_branch_coverage=1 00:06:09.930 --rc genhtml_function_coverage=1 00:06:09.930 --rc genhtml_legend=1 00:06:09.930 --rc 
geninfo_all_blocks=1 00:06:09.930 --rc geninfo_unexecuted_blocks=1 00:06:09.930 00:06:09.930 ' 00:06:09.930 22:02:16 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:09.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.930 --rc genhtml_branch_coverage=1 00:06:09.930 --rc genhtml_function_coverage=1 00:06:09.930 --rc genhtml_legend=1 00:06:09.930 --rc geninfo_all_blocks=1 00:06:09.930 --rc geninfo_unexecuted_blocks=1 00:06:09.930 00:06:09.930 ' 00:06:09.930 22:02:16 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:09.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.930 --rc genhtml_branch_coverage=1 00:06:09.930 --rc genhtml_function_coverage=1 00:06:09.930 --rc genhtml_legend=1 00:06:09.930 --rc geninfo_all_blocks=1 00:06:09.930 --rc geninfo_unexecuted_blocks=1 00:06:09.930 00:06:09.930 ' 00:06:09.930 22:02:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:09.930 22:02:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.930 22:02:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.930 22:02:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:09.930 22:02:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.930 22:02:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.930 ************************************ 00:06:09.930 START TEST event_perf 00:06:09.930 ************************************ 00:06:09.930 22:02:16 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.191 Running I/O for 1 seconds...[2024-12-16 22:02:16.285108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:10.191 [2024-12-16 22:02:16.285211] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72113 ] 00:06:10.191 [2024-12-16 22:02:16.440090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.191 [2024-12-16 22:02:16.462373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.192 [2024-12-16 22:02:16.462666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.192 [2024-12-16 22:02:16.462742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.192 [2024-12-16 22:02:16.462767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.578 Running I/O for 1 seconds... 00:06:11.578 lcore 0: 183787 00:06:11.578 lcore 1: 183786 00:06:11.578 lcore 2: 183785 00:06:11.578 lcore 3: 183786 00:06:11.578 done. 
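Each of the four counters above is one reactor's tally for the 1-second run (-t 1): roughly 183,786 events per lcore. The lcores themselves follow from the -m 0xF core mask passed to event_perf, since 0xF is binary 1111 and selects lcores 0 through 3. A small illustrative snippet for expanding such a mask:

  # expand a core mask of the kind passed via -m into its lcore list;
  # 0xF yields lcores 0..3, matching the four reactors reported above
  mask=0xF
  for bit in $(seq 0 31); do
      if (( (mask >> bit) & 1 )); then
          echo "lcore $bit"
      fi
  done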
00:06:11.578 00:06:11.578 real 0m1.263s 00:06:11.578 user 0m4.070s 00:06:11.578 sys 0m0.074s 00:06:11.578 22:02:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.578 22:02:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.578 ************************************ 00:06:11.578 END TEST event_perf 00:06:11.578 ************************************ 00:06:11.578 22:02:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.578 22:02:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:11.578 22:02:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.578 22:02:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.578 ************************************ 00:06:11.578 START TEST event_reactor 00:06:11.578 ************************************ 00:06:11.578 22:02:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.578 [2024-12-16 22:02:17.619378] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:11.578 [2024-12-16 22:02:17.619525] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72147 ] 00:06:11.578 [2024-12-16 22:02:17.779285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.578 [2024-12-16 22:02:17.810063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.520 test_start 00:06:12.520 oneshot 00:06:12.520 tick 100 00:06:12.520 tick 100 00:06:12.520 tick 250 00:06:12.520 tick 100 00:06:12.520 tick 100 00:06:12.520 tick 100 00:06:12.520 tick 250 00:06:12.520 tick 500 00:06:12.520 tick 100 00:06:12.520 tick 100 00:06:12.520 tick 250 00:06:12.520 tick 100 00:06:12.520 tick 100 00:06:12.520 test_end 00:06:12.781 00:06:12.781 real 0m1.277s 00:06:12.781 user 0m1.093s 00:06:12.781 sys 0m0.073s 00:06:12.781 22:02:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.781 ************************************ 00:06:12.781 END TEST event_reactor 00:06:12.781 ************************************ 00:06:12.781 22:02:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.781 22:02:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.781 22:02:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:12.781 22:02:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.781 22:02:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.781 ************************************ 00:06:12.781 START TEST event_reactor_perf 00:06:12.781 ************************************ 00:06:12.781 22:02:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.781 [2024-12-16 22:02:18.965054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:12.781 [2024-12-16 22:02:18.965191] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72183 ] 00:06:13.042 [2024-12-16 22:02:19.154923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.042 [2024-12-16 22:02:19.193758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.003 test_start 00:06:14.003 test_end 00:06:14.003 Performance: 288198 events per second 00:06:14.003 00:06:14.003 real 0m1.319s 00:06:14.003 user 0m1.122s 00:06:14.003 sys 0m0.086s 00:06:14.003 ************************************ 00:06:14.003 END TEST event_reactor_perf 00:06:14.003 ************************************ 00:06:14.003 22:02:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.003 22:02:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.003 22:02:20 event -- event/event.sh@49 -- # uname -s 00:06:14.003 22:02:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.003 22:02:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.003 22:02:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.003 22:02:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.003 22:02:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.003 ************************************ 00:06:14.003 START TEST event_scheduler 00:06:14.003 ************************************ 00:06:14.003 22:02:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.264 * Looking for test storage... 
00:06:14.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.264 22:02:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.264 --rc genhtml_branch_coverage=1 00:06:14.264 --rc genhtml_function_coverage=1 00:06:14.264 --rc genhtml_legend=1 00:06:14.264 --rc geninfo_all_blocks=1 00:06:14.264 --rc geninfo_unexecuted_blocks=1 00:06:14.264 00:06:14.264 ' 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.264 --rc genhtml_branch_coverage=1 00:06:14.264 --rc genhtml_function_coverage=1 00:06:14.264 --rc genhtml_legend=1 00:06:14.264 --rc geninfo_all_blocks=1 00:06:14.264 --rc geninfo_unexecuted_blocks=1 00:06:14.264 00:06:14.264 ' 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.264 --rc genhtml_branch_coverage=1 00:06:14.264 --rc genhtml_function_coverage=1 00:06:14.264 --rc genhtml_legend=1 00:06:14.264 --rc geninfo_all_blocks=1 00:06:14.264 --rc geninfo_unexecuted_blocks=1 00:06:14.264 00:06:14.264 ' 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.264 --rc genhtml_branch_coverage=1 00:06:14.264 --rc genhtml_function_coverage=1 00:06:14.264 --rc genhtml_legend=1 00:06:14.264 --rc geninfo_all_blocks=1 00:06:14.264 --rc geninfo_unexecuted_blocks=1 00:06:14.264 00:06:14.264 ' 00:06:14.264 22:02:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.264 22:02:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72254 00:06:14.264 22:02:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.264 22:02:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72254 00:06:14.264 22:02:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.264 22:02:20 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72254 ']' 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.264 22:02:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.264 [2024-12-16 22:02:20.562932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:14.264 [2024-12-16 22:02:20.563095] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72254 ] 00:06:14.524 [2024-12-16 22:02:20.723779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.524 [2024-12-16 22:02:20.759053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.524 [2024-12-16 22:02:20.759712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.524 [2024-12-16 22:02:20.760087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.524 [2024-12-16 22:02:20.760163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.097 22:02:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.097 22:02:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:15.097 22:02:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.097 22:02:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.097 22:02:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.097 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.097 POWER: Cannot set governor of lcore 0 to performance 00:06:15.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.097 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.097 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.097 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.097 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:15.097 POWER: Unable to set Power Management Environment for lcore 0 00:06:15.097 [2024-12-16 22:02:21.438143] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:15.097 [2024-12-16 22:02:21.438168] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:15.097 [2024-12-16 22:02:21.438178] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.097 [2024-12-16 22:02:21.438197] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.097 [2024-12-16 
22:02:21.438205] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.097 [2024-12-16 22:02:21.438214] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.097 22:02:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.097 22:02:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.097 22:02:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.097 22:02:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.358 [2024-12-16 22:02:21.519138] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:15.358 22:02:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.358 22:02:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.358 22:02:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.358 22:02:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.358 22:02:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.358 ************************************ 00:06:15.358 START TEST scheduler_create_thread 00:06:15.358 ************************************ 00:06:15.358 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 2 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 3 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 4 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 5 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 6 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 7 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 8 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 9 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 10 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.359 22:02:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.743 22:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.743 22:02:22 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:16.743 22:02:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:16.743 22:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.743 22:02:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.682 22:02:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.682 22:02:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:17.682 22:02:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.682 22:02:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.622 22:02:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.622 22:02:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:18.622 22:02:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:18.622 22:02:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.622 22:02:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.198 22:02:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.198 00:06:19.198 real 0m3.884s 00:06:19.198 user 0m0.012s 00:06:19.198 sys 0m0.013s 00:06:19.198 22:02:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.198 22:02:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.198 ************************************ 00:06:19.198 END TEST scheduler_create_thread 00:06:19.198 ************************************ 00:06:19.198 22:02:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:19.198 22:02:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72254 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72254 ']' 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72254 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72254 00:06:19.198 killing process with pid 72254 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72254' 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72254 00:06:19.198 22:02:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72254 00:06:19.458 [2024-12-16 22:02:25.801199] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:19.719 ************************************ 00:06:19.719 END TEST event_scheduler 00:06:19.719 ************************************ 00:06:19.719 00:06:19.719 real 0m5.696s 00:06:19.719 user 0m12.001s 00:06:19.719 sys 0m0.388s 00:06:19.719 22:02:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.719 22:02:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.719 22:02:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:19.719 22:02:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:19.719 22:02:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.719 22:02:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.719 22:02:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.719 ************************************ 00:06:19.720 START TEST app_repeat 00:06:19.720 ************************************ 00:06:19.720 22:02:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:19.720 22:02:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.720 22:02:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.720 22:02:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:19.720 22:02:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.720 22:02:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:19.720 22:02:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:19.720 22:02:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:19.980 22:02:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72365 00:06:19.980 Process app_repeat pid: 72365 00:06:19.980 spdk_app_start Round 0 00:06:19.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.980 22:02:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.980 22:02:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:19.980 22:02:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72365' 00:06:19.980 22:02:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.980 22:02:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:19.980 22:02:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72365 /var/tmp/spdk-nbd.sock 00:06:19.980 22:02:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72365 ']' 00:06:19.980 22:02:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.980 22:02:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.980 22:02:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.980 22:02:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.980 22:02:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.980 [2024-12-16 22:02:26.096400] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
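
Annotation: the event_scheduler run above reduces to a short RPC sequence against the app's /var/tmp/spdk.sock socket. A condensed sketch of that sequence in bash, with thread names, core masks, and activity percentages copied from the trace; rpc_cmd is the suite's wrapper around rpc.py, so the direct rpc.py form shown here is an assumption about how it expands:

    # Switch the framework to the dynamic scheduler, then finish init.
    rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    rpc.py -s /var/tmp/spdk.sock framework_start_init

    # Four busy threads pinned to cores 0-3 (100% active) and four idle
    # pinned threads (0% active) on the same cores. The trace creates all
    # active threads first; this loop interleaves them for brevity.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
            scheduler_thread_create -n active_pinned -m "$mask" -a 100
        rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
            scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # Unpinned threads: one 30% active; one created idle and raised to
    # 50% activity (thread_id=11 in the trace); one created busy and then
    # deleted again (thread_id=12 in the trace).
    rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_create -n half_active -a 0)
    rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_create -n deleted -a 100)
    rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin \
        scheduler_thread_delete "$thread_id"

The POWER and GUEST_CHANNEL errors above are non-fatal here: with no writable cpufreq governor inside the VM, the dynamic scheduler logs "Unable to initialize dpdk governor" and carries on without frequency scaling, which is why framework_start_init still succeeds.
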
00:06:19.980 [2024-12-16 22:02:26.096504] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72365 ] 00:06:19.980 [2024-12-16 22:02:26.254418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.980 [2024-12-16 22:02:26.273777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.980 [2024-12-16 22:02:26.273831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.922 22:02:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.922 22:02:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.922 22:02:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.922 Malloc0 00:06:20.922 22:02:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.183 Malloc1 00:06:21.183 22:02:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.183 22:02:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.445 /dev/nbd0 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.446 22:02:27 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.446 1+0 records in 00:06:21.446 1+0 records out 00:06:21.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017838 s, 23.0 MB/s 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.446 /dev/nbd1 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.446 1+0 records in 00:06:21.446 1+0 records out 00:06:21.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236178 s, 17.3 MB/s 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.446 22:02:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.446 22:02:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.446 
22:02:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.709 22:02:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.709 { 00:06:21.709 "nbd_device": "/dev/nbd0", 00:06:21.709 "bdev_name": "Malloc0" 00:06:21.709 }, 00:06:21.709 { 00:06:21.709 "nbd_device": "/dev/nbd1", 00:06:21.709 "bdev_name": "Malloc1" 00:06:21.709 } 00:06:21.709 ]' 00:06:21.709 22:02:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.709 { 00:06:21.709 "nbd_device": "/dev/nbd0", 00:06:21.709 "bdev_name": "Malloc0" 00:06:21.709 }, 00:06:21.709 { 00:06:21.709 "nbd_device": "/dev/nbd1", 00:06:21.709 "bdev_name": "Malloc1" 00:06:21.709 } 00:06:21.709 ]' 00:06:21.709 22:02:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.709 /dev/nbd1' 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.709 /dev/nbd1' 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.709 256+0 records in 00:06:21.709 256+0 records out 00:06:21.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00793441 s, 132 MB/s 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.709 256+0 records in 00:06:21.709 256+0 records out 00:06:21.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179265 s, 58.5 MB/s 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.709 22:02:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.052 256+0 records in 00:06:22.052 256+0 records out 00:06:22.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169433 s, 61.9 MB/s 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.052 22:02:28 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.052 22:02:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.312 22:02:28 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.312 22:02:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.570 22:02:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.570 22:02:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.827 22:02:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.827 [2024-12-16 22:02:29.032001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.827 [2024-12-16 22:02:29.048244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.827 [2024-12-16 22:02:29.048418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.827 [2024-12-16 22:02:29.077945] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.827 [2024-12-16 22:02:29.078001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.108 spdk_app_start Round 1 00:06:26.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.108 22:02:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.108 22:02:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:26.108 22:02:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72365 /var/tmp/spdk-nbd.sock 00:06:26.108 22:02:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72365 ']' 00:06:26.108 22:02:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.108 22:02:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.108 22:02:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
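
Annotation: each app_repeat round that follows repeats the create/write/verify/teardown cycle just completed above. A minimal sketch of one round, assembled from the commands visible in the trace (paths and sizes are copied from it; error handling and the waitfornbd polling loops are omitted):

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # Two 64 MiB malloc bdevs with a 4096-byte block size, each exported
    # over the nbd kernel driver (modprobe nbd ran earlier in the log).
    "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    # Push 1 MiB of random data through each device, then read it back.
    dd if=/dev/urandom of="$randfile" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$randfile" "$nbd"                 # byte-for-byte verify
    done
    rm "$randfile"

    # Detach both exports; nbd_get_disks should then return [].
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
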
00:06:26.108 22:02:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.108 22:02:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.108 22:02:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.108 22:02:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:26.108 22:02:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.108 Malloc0 00:06:26.108 22:02:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.365 Malloc1 00:06:26.365 22:02:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.365 22:02:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.623 /dev/nbd0 00:06:26.623 22:02:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.623 22:02:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.623 1+0 records in 00:06:26.623 1+0 records out 
00:06:26.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220782 s, 18.6 MB/s 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.623 22:02:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.623 22:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.623 22:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.623 22:02:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.623 /dev/nbd1 00:06:26.623 22:02:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.881 22:02:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.881 1+0 records in 00:06:26.881 1+0 records out 00:06:26.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200357 s, 20.4 MB/s 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.881 22:02:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.881 22:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.881 22:02:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.881 22:02:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.881 22:02:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.881 22:02:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.881 22:02:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.881 { 00:06:26.882 "nbd_device": "/dev/nbd0", 00:06:26.882 "bdev_name": "Malloc0" 00:06:26.882 }, 00:06:26.882 { 00:06:26.882 "nbd_device": "/dev/nbd1", 00:06:26.882 "bdev_name": "Malloc1" 00:06:26.882 } 
00:06:26.882 ]' 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.882 { 00:06:26.882 "nbd_device": "/dev/nbd0", 00:06:26.882 "bdev_name": "Malloc0" 00:06:26.882 }, 00:06:26.882 { 00:06:26.882 "nbd_device": "/dev/nbd1", 00:06:26.882 "bdev_name": "Malloc1" 00:06:26.882 } 00:06:26.882 ]' 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.882 /dev/nbd1' 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.882 /dev/nbd1' 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.882 22:02:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.139 256+0 records in 00:06:27.139 256+0 records out 00:06:27.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428509 s, 245 MB/s 00:06:27.139 22:02:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.139 22:02:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.139 256+0 records in 00:06:27.139 256+0 records out 00:06:27.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140196 s, 74.8 MB/s 00:06:27.139 22:02:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.139 22:02:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.139 256+0 records in 00:06:27.139 256+0 records out 00:06:27.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143799 s, 72.9 MB/s 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.140 22:02:33 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.140 22:02:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.397 22:02:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.397 22:02:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.397 22:02:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.397 22:02:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.397 22:02:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.397 22:02:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.398 22:02:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.398 22:02:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.398 22:02:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.398 22:02:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.398 22:02:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.655 22:02:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.655 22:02:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.913 22:02:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.913 [2024-12-16 22:02:34.212738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.913 [2024-12-16 22:02:34.228553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.913 [2024-12-16 22:02:34.228560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.913 [2024-12-16 22:02:34.257625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.914 [2024-12-16 22:02:34.257665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.194 spdk_app_start Round 2 00:06:31.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.194 22:02:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.194 22:02:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.194 22:02:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72365 /var/tmp/spdk-nbd.sock 00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72365 ']' 00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
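
Annotation: the count=2 / count=0 checks that bracket every round come from parsing the nbd_get_disks output traced above. A short sketch of that check; the trailing true matches the trace, since grep -c still prints 0 but exits non-zero when nothing matches:

    # nbd_get_disks returns a JSON array of {nbd_device, bdev_name} objects.
    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)
    # count is 2 while Malloc0/Malloc1 are exported, and 0 after teardown.
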
00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.194 22:02:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:31.194 22:02:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.452 Malloc0 00:06:31.452 22:02:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.452 Malloc1 00:06:31.453 22:02:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.453 22:02:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.711 /dev/nbd0 00:06:31.711 22:02:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.711 22:02:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.711 22:02:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:31.711 22:02:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:31.711 22:02:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.711 22:02:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.711 22:02:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.711 1+0 records in 00:06:31.711 1+0 records out 
00:06:31.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370564 s, 11.1 MB/s 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.711 22:02:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:31.711 22:02:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.711 22:02:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.711 22:02:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.969 /dev/nbd1 00:06:31.969 22:02:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.969 22:02:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.969 1+0 records in 00:06:31.969 1+0 records out 00:06:31.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001483 s, 27.6 MB/s 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.969 22:02:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:31.969 22:02:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.969 22:02:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.969 22:02:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.969 22:02:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.969 22:02:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.228 { 00:06:32.228 "nbd_device": "/dev/nbd0", 00:06:32.228 "bdev_name": "Malloc0" 00:06:32.228 }, 00:06:32.228 { 00:06:32.228 "nbd_device": "/dev/nbd1", 00:06:32.228 "bdev_name": "Malloc1" 00:06:32.228 } 
00:06:32.228 ]' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.228 { 00:06:32.228 "nbd_device": "/dev/nbd0", 00:06:32.228 "bdev_name": "Malloc0" 00:06:32.228 }, 00:06:32.228 { 00:06:32.228 "nbd_device": "/dev/nbd1", 00:06:32.228 "bdev_name": "Malloc1" 00:06:32.228 } 00:06:32.228 ]' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.228 /dev/nbd1' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.228 /dev/nbd1' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.228 256+0 records in 00:06:32.228 256+0 records out 00:06:32.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513679 s, 204 MB/s 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.228 256+0 records in 00:06:32.228 256+0 records out 00:06:32.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142057 s, 73.8 MB/s 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.228 256+0 records in 00:06:32.228 256+0 records out 00:06:32.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203133 s, 51.6 MB/s 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.228 22:02:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.228 22:02:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.486 22:02:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.744 22:02:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.002 22:02:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.002 22:02:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.260 22:02:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.260 [2024-12-16 22:02:39.496999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.260 [2024-12-16 22:02:39.513013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.260 [2024-12-16 22:02:39.513040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.260 [2024-12-16 22:02:39.541965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.260 [2024-12-16 22:02:39.542011] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.542 22:02:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72365 /var/tmp/spdk-nbd.sock 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72365 ']' 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
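The nbd_dd_data_verify trace above reduces to a plain write-then-compare round trip. A condensed sketch of what the trace performs (loop form instead of the unrolled xtrace; the temp path is shortened here, everything else is as invoked by bdev/nbd_common.sh):

    # Sketch of the write/verify round trip traced above.
    tmp_file=$SPDK_REPO/test/event/nbdrandtest           # $SPDK_REPO is a stand-in path
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write to each device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp_file" "$dev"                    # verify: byte-compare the first 1 MiB
    done
    rm "$tmp_file"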
00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:36.542 22:02:42 event.app_repeat -- event/event.sh@39 -- # killprocess 72365 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72365 ']' 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72365 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72365 00:06:36.542 killing process with pid 72365 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72365' 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72365 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72365 00:06:36.542 spdk_app_start is called in Round 0. 00:06:36.542 Shutdown signal received, stop current app iteration 00:06:36.542 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:36.542 spdk_app_start is called in Round 1. 00:06:36.542 Shutdown signal received, stop current app iteration 00:06:36.542 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:36.542 spdk_app_start is called in Round 2. 00:06:36.542 Shutdown signal received, stop current app iteration 00:06:36.542 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:36.542 spdk_app_start is called in Round 3. 00:06:36.542 Shutdown signal received, stop current app iteration 00:06:36.542 ************************************ 00:06:36.542 END TEST app_repeat 00:06:36.542 ************************************ 00:06:36.542 22:02:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.542 22:02:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.542 00:06:36.542 real 0m16.695s 00:06:36.542 user 0m37.415s 00:06:36.542 sys 0m1.904s 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.542 22:02:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.542 22:02:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.542 22:02:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:36.542 22:02:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.542 22:02:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.542 22:02:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.542 ************************************ 00:06:36.542 START TEST cpu_locks 00:06:36.542 ************************************ 00:06:36.542 22:02:42 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:36.542 * Looking for test storage... 
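The waitforlisten/killprocess pair above recurs throughout the rest of this section. A simplified sketch of the pattern, not the exact helper bodies from test/common/autotest_common.sh (retry cadence is an assumption; the max_retries=100 matches the trace):

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 1; i <= 100; i++)); do
        # done once the RPC socket exists and the process is still alive
        [[ -S $rpc_addr ]] && kill -0 "$pid" 2>/dev/null && return 0
        sleep 0.1
      done
      return 1
    }
    killprocess() {
      kill "$1" && wait "$1"   # terminate, then reap so the exit status is observed
    }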
00:06:36.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:36.542 22:02:42 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.542 22:02:42 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.542 22:02:42 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.801 22:02:42 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.801 22:02:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.801 22:02:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.801 22:02:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.801 22:02:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.801 22:02:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.801 22:02:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.801 22:02:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.802 22:02:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.802 --rc genhtml_branch_coverage=1 00:06:36.802 --rc genhtml_function_coverage=1 00:06:36.802 --rc genhtml_legend=1 00:06:36.802 --rc geninfo_all_blocks=1 00:06:36.802 --rc geninfo_unexecuted_blocks=1 00:06:36.802 00:06:36.802 ' 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.802 --rc genhtml_branch_coverage=1 00:06:36.802 --rc genhtml_function_coverage=1 
00:06:36.802 --rc genhtml_legend=1 00:06:36.802 --rc geninfo_all_blocks=1 00:06:36.802 --rc geninfo_unexecuted_blocks=1 00:06:36.802 00:06:36.802 ' 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.802 --rc genhtml_branch_coverage=1 00:06:36.802 --rc genhtml_function_coverage=1 00:06:36.802 --rc genhtml_legend=1 00:06:36.802 --rc geninfo_all_blocks=1 00:06:36.802 --rc geninfo_unexecuted_blocks=1 00:06:36.802 00:06:36.802 ' 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.802 --rc genhtml_branch_coverage=1 00:06:36.802 --rc genhtml_function_coverage=1 00:06:36.802 --rc genhtml_legend=1 00:06:36.802 --rc geninfo_all_blocks=1 00:06:36.802 --rc geninfo_unexecuted_blocks=1 00:06:36.802 00:06:36.802 ' 00:06:36.802 22:02:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.802 22:02:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.802 22:02:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.802 22:02:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.802 22:02:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.802 ************************************ 00:06:36.802 START TEST default_locks 00:06:36.802 ************************************ 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72785 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72785 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72785 ']' 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.802 22:02:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.802 [2024-12-16 22:02:43.034613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
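default_locks asserts that a freshly started target holds its core lock and that the lock vanishes with the process. The locks_exist probe traced below is just lslocks piped to grep:

    # locks_exist, as traced below (sketch): does the pid hold a file lock
    # whose path contains spdk_cpu_lock?
    locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
    }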
00:06:36.802 [2024-12-16 22:02:43.034876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72785 ] 00:06:37.061 [2024-12-16 22:02:43.190120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.061 [2024-12-16 22:02:43.207412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.632 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.632 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:37.632 22:02:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72785 00:06:37.632 22:02:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72785 00:06:37.632 22:02:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.633 22:02:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72785 00:06:37.633 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72785 ']' 00:06:37.633 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72785 00:06:37.633 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:37.633 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.633 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72785 00:06:37.894 killing process with pid 72785 00:06:37.894 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.894 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.894 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72785' 00:06:37.894 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72785 00:06:37.894 22:02:43 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72785 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72785 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72785 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72785 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72785 ']' 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.894 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.894 ERROR: process (pid: 72785) is no longer running 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.894 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72785) - No such process 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.894 00:06:37.894 real 0m1.262s 00:06:37.894 user 0m1.242s 00:06:37.894 sys 0m0.401s 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.894 22:02:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.894 ************************************ 00:06:37.894 END TEST default_locks 00:06:37.894 ************************************ 00:06:38.156 22:02:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.156 22:02:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.156 22:02:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.156 22:02:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.156 ************************************ 00:06:38.156 START TEST default_locks_via_rpc 00:06:38.156 ************************************ 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72827 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72827 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72827 ']' 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
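The NOT waitforlisten sequence above is the suite's negative assertion: once pid 72785 is gone, waitforlisten must fail, and the wrapper turns that failure into success. A reduced sketch (the real helper also remaps statuses above 128 before inverting, visible in the (( es > 128 )) check traced above; that detail is elided here):

    NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # succeed only when the wrapped command failed
    }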
00:06:38.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.156 22:02:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.156 [2024-12-16 22:02:44.352401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:38.156 [2024-12-16 22:02:44.352712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72827 ] 00:06:38.417 [2024-12-16 22:02:44.507033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.417 [2024-12-16 22:02:44.525090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72827 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72827 00:06:38.986 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72827 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72827 ']' 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72827 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.245 
22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72827 00:06:39.245 killing process with pid 72827 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.245 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72827' 00:06:39.246 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72827 00:06:39.246 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72827 00:06:39.514 ************************************ 00:06:39.514 END TEST default_locks_via_rpc 00:06:39.514 00:06:39.514 real 0m1.350s 00:06:39.514 user 0m1.406s 00:06:39.514 sys 0m0.377s 00:06:39.514 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.514 22:02:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.514 ************************************ 00:06:39.514 22:02:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:39.514 22:02:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.514 22:02:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.514 22:02:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.514 ************************************ 00:06:39.514 START TEST non_locking_app_on_locked_coremask 00:06:39.514 ************************************ 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72873 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72873 /var/tmp/spdk.sock 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72873 ']' 00:06:39.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.514 22:02:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.514 [2024-12-16 22:02:45.745804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
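default_locks_via_rpc above exercised the runtime toggle rather than a startup flag: the lock files are dropped and re-acquired over the RPC socket. The two calls as invoked in the trace (paths relative to the repo root):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # drop lock files
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # re-acquire them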
00:06:39.514 [2024-12-16 22:02:45.746036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72873 ] 00:06:39.776 [2024-12-16 22:02:45.899266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.776 [2024-12-16 22:02:45.923368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72884 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72884 /var/tmp/spdk2.sock 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72884 ']' 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.348 22:02:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.348 [2024-12-16 22:02:46.680962] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:40.348 [2024-12-16 22:02:46.681108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72884 ] 00:06:40.609 [2024-12-16 22:02:46.857622] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.609 [2024-12-16 22:02:46.857689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.609 [2024-12-16 22:02:46.919981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72873 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72873 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72873 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72873 ']' 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72873 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72873 00:06:41.549 killing process with pid 72873 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72873' 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72873 00:06:41.549 22:02:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72873 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72884 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72884 ']' 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72884 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72884 00:06:42.492 killing process with pid 72884 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72884' 00:06:42.492 22:02:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72884 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72884 00:06:42.492 00:06:42.492 real 0m3.101s 00:06:42.492 user 0m3.352s 00:06:42.492 sys 0m0.863s 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.492 22:02:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.492 ************************************ 00:06:42.492 END TEST non_locking_app_on_locked_coremask 00:06:42.492 ************************************ 00:06:42.492 22:02:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.492 22:02:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.492 22:02:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.492 22:02:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.492 ************************************ 00:06:42.492 START TEST locking_app_on_unlocked_coremask 00:06:42.492 ************************************ 00:06:42.492 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:42.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.492 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72953 00:06:42.492 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72953 /var/tmp/spdk.sock 00:06:42.492 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72953 ']' 00:06:42.492 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.492 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.492 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.493 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.493 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.493 22:02:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.753 [2024-12-16 22:02:48.899498] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:42.753 [2024-12-16 22:02:48.899598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72953 ] 00:06:42.753 [2024-12-16 22:02:49.047274] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.753 [2024-12-16 22:02:49.047491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.753 [2024-12-16 22:02:49.066442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72958 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72958 /var/tmp/spdk2.sock 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72958 ']' 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.694 22:02:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.694 [2024-12-16 22:02:49.802313] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
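Both non_locking_app_on_locked_coremask above and locking_app_on_unlocked_coremask here hinge on the same setup: two targets sharing core mask 0x1, with exactly one side skipping lock acquisition, so they can coexist. A sketch of the launch pair (flags and sockets as in the trace; pids differ per run):

    build/bin/spdk_tgt -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # coexists, no claim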
00:06:43.694 [2024-12-16 22:02:49.802546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:06:43.694 [2024-12-16 22:02:49.965476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.694 [2024-12-16 22:02:50.002618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72958 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72958 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72953 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72953 ']' 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72953 00:06:44.635 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.895 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.895 22:02:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72953 00:06:44.895 killing process with pid 72953 00:06:44.895 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.895 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.895 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72953' 00:06:44.895 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72953 00:06:44.895 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72953 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72958 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72958 ']' 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72958 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72958 00:06:45.156 killing process with pid 72958 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.156 22:02:51 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72958' 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72958 00:06:45.156 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72958 00:06:45.417 ************************************ 00:06:45.417 END TEST locking_app_on_unlocked_coremask 00:06:45.417 ************************************ 00:06:45.417 00:06:45.417 real 0m2.858s 00:06:45.417 user 0m3.215s 00:06:45.417 sys 0m0.723s 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.417 22:02:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:45.417 22:02:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.417 22:02:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.417 22:02:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.417 ************************************ 00:06:45.417 START TEST locking_app_on_locked_coremask 00:06:45.417 ************************************ 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=73016 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 73016 /var/tmp/spdk.sock 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73016 ']' 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.417 22:02:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.678 [2024-12-16 22:02:51.809416] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:45.678 [2024-12-16 22:02:51.809539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73016 ] 00:06:45.678 [2024-12-16 22:02:51.962412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.678 [2024-12-16 22:02:51.979351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.619 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.619 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.619 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73032 00:06:46.619 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73032 /var/tmp/spdk2.sock 00:06:46.619 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73032 /var/tmp/spdk2.sock 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73032 /var/tmp/spdk2.sock 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73032 ']' 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.620 22:02:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.620 [2024-12-16 22:02:52.709855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:46.620 [2024-12-16 22:02:52.710128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73032 ] 00:06:46.620 [2024-12-16 22:02:52.870035] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 73016 has claimed it. 00:06:46.620 [2024-12-16 22:02:52.870086] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.281 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73032) - No such process 00:06:47.281 ERROR: process (pid: 73032) is no longer running 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 73016 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73016 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 73016 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73016 ']' 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 73016 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73016 00:06:47.281 killing process with pid 73016 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73016' 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 73016 00:06:47.281 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 73016 00:06:47.540 00:06:47.540 real 0m2.072s 00:06:47.540 user 0m2.328s 00:06:47.540 sys 0m0.494s 00:06:47.540 22:02:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.540 22:02:53 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:47.540 ************************************ 00:06:47.540 END TEST locking_app_on_locked_coremask 00:06:47.540 ************************************ 00:06:47.540 22:02:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.540 22:02:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.540 22:02:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.540 22:02:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.540 ************************************ 00:06:47.540 START TEST locking_overlapped_coremask 00:06:47.540 ************************************ 00:06:47.540 22:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:47.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.540 22:02:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73074 00:06:47.540 22:02:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 73074 /var/tmp/spdk.sock 00:06:47.540 22:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73074 ']' 00:06:47.540 22:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.540 22:02:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.541 22:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.541 22:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.541 22:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.541 22:02:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.800 [2024-12-16 22:02:53.943585] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
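locking_app_on_locked_coremask above is the inverse case: with locks active on both sides, the second claim on core 0 must fail, and the test asserts the failure via NOT waitforlisten on the new pid rather than on the launch itself. Sketched ($B_PID is a hypothetical placeholder for the second pid):

    build/bin/spdk_tgt -m 0x1 &                          # pid A claims core 0
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # pid B: claim on core 0 fails,
                                                         # 'Unable to acquire lock ... - exiting'
    NOT waitforlisten "$B_PID" /var/tmp/spdk2.sock       # assert B never comes up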
00:06:47.800 [2024-12-16 22:02:53.943727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73074 ] 00:06:47.800 [2024-12-16 22:02:54.107641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.800 [2024-12-16 22:02:54.127972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.800 [2024-12-16 22:02:54.128201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.800 [2024-12-16 22:02:54.128281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73092 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73092 /var/tmp/spdk2.sock 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73092 /var/tmp/spdk2.sock 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73092 /var/tmp/spdk2.sock 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73092 ']' 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.743 22:02:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.743 [2024-12-16 22:02:54.839435] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:48.743 [2024-12-16 22:02:54.839719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73092 ] 00:06:48.743 [2024-12-16 22:02:55.010633] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73074 has claimed it. 00:06:48.743 [2024-12-16 22:02:55.010692] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.314 ERROR: process (pid: 73092) is no longer running 00:06:49.314 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73092) - No such process 00:06:49.314 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.314 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:49.314 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:49.314 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.314 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.314 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.314 22:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 73074 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 73074 ']' 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 73074 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73074 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73074' 00:06:49.315 killing process with pid 73074 00:06:49.315 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 73074 00:06:49.315 22:02:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 73074 00:06:49.574 00:06:49.574 real 0m1.893s 00:06:49.574 user 0m5.208s 00:06:49.574 sys 0m0.403s 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.574 ************************************ 00:06:49.574 END TEST locking_overlapped_coremask 00:06:49.574 ************************************ 00:06:49.574 22:02:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.574 22:02:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.574 22:02:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.574 22:02:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.574 ************************************ 00:06:49.574 START TEST locking_overlapped_coremask_via_rpc 00:06:49.574 ************************************ 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73134 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73134 /var/tmp/spdk.sock 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73134 ']' 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.574 22:02:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.574 [2024-12-16 22:02:55.883653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:49.574 [2024-12-16 22:02:55.883774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73134 ] 00:06:49.835 [2024-12-16 22:02:56.034126] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.835 [2024-12-16 22:02:56.034170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.835 [2024-12-16 22:02:56.054526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.835 [2024-12-16 22:02:56.054809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.835 [2024-12-16 22:02:56.054823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73152 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73152 /var/tmp/spdk2.sock 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73152 ']' 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.408 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.409 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.409 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.409 22:02:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.668 [2024-12-16 22:02:56.790786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:50.668 [2024-12-16 22:02:56.790929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73152 ] 00:06:50.668 [2024-12-16 22:02:56.965317] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.668 [2024-12-16 22:02:56.965374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.928 [2024-12-16 22:02:57.029115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.928 [2024-12-16 22:02:57.031926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.928 [2024-12-16 22:02:57.031983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.500 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.501 [2024-12-16 22:02:57.641971] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73134 has claimed it. 
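The "Cannot create lock" errors in this section come from SPDK's per-core lock files: a target started with -m claims one file per reactor core under /var/tmp/spdk_cpu_lock_NNN, and a second target whose coremask overlaps (0x7 and 0x1c share core 2) cannot take the core-2 lock. The JSON-RPC error that follows is the same failure surfaced through framework_enable_cpumask_locks. Below is a minimal bash sketch of the idea only; the flock-based helper is illustrative, the real logic is C code in app.c (claim_cpu_cores):

    # Illustrative analogue of the per-core lock claim; not SPDK's real code.
    claim_core() {
        local core lock fd
        core=$1
        lock=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
        exec {fd}>"$lock"          # open/create the per-core lock file
        if ! flock -n "$fd"; then  # non-blocking exclusive lock
            echo "Cannot create lock on core $core, another process has claimed it" >&2
            return 1
        fi
    }
    # -m 0x7 claims cores 0,1,2; a second target with -m 0x1c (cores 2,3,4)
    # then fails on the shared core 2, as process 73152 does above.
    for core in 2 3 4; do claim_core "$core" || exit 1; done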
00:06:51.501 request: 00:06:51.501 { 00:06:51.501 "method": "framework_enable_cpumask_locks", 00:06:51.501 "req_id": 1 00:06:51.501 } 00:06:51.501 Got JSON-RPC error response 00:06:51.501 response: 00:06:51.501 { 00:06:51.501 "code": -32603, 00:06:51.501 "message": "Failed to claim CPU core: 2" 00:06:51.501 } 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73134 /var/tmp/spdk.sock 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73134 ']' 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73152 /var/tmp/spdk2.sock 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73152 ']' 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
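The request/response pair above can be reproduced by hand: rpc_cmd in the test is a thin wrapper around scripts/rpc.py, so the equivalent direct call against the second target's socket is:

    # Ask the second target (started with --disable-cpumask-locks) to take
    # its core locks while pid 73134 still holds core 2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # Expected: rpc.py prints the JSON-RPC error object shown above
    # ("code": -32603, "Failed to claim CPU core: 2") and exits non-zero,
    # which is exactly what the NOT wrapper in the test asserts.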
00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.501 22:02:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.762 00:06:51.762 real 0m2.249s 00:06:51.762 user 0m1.064s 00:06:51.762 sys 0m0.119s 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.762 22:02:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.762 ************************************ 00:06:51.762 END TEST locking_overlapped_coremask_via_rpc 00:06:51.762 ************************************ 00:06:51.762 22:02:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.762 22:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73134 ]] 00:06:51.762 22:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73134 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73134 ']' 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73134 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73134 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.762 killing process with pid 73134 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73134' 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73134 00:06:51.762 22:02:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73134 00:06:52.023 22:02:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73152 ]] 00:06:52.023 22:02:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73152 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73152 ']' 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73152 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.023 
22:02:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73152 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:52.023 killing process with pid 73152 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73152' 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73152 00:06:52.023 22:02:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73152 00:06:52.285 22:02:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.285 22:02:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.285 22:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73134 ]] 00:06:52.285 22:02:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73134 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73134 ']' 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73134 00:06:52.285 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73134) - No such process 00:06:52.285 Process with pid 73134 is not found 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73134 is not found' 00:06:52.285 22:02:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73152 ]] 00:06:52.285 22:02:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73152 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73152 ']' 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73152 00:06:52.285 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73152) - No such process 00:06:52.285 Process with pid 73152 is not found 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73152 is not found' 00:06:52.285 22:02:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.285 ************************************ 00:06:52.285 END TEST cpu_locks 00:06:52.285 ************************************ 00:06:52.285 00:06:52.285 real 0m15.768s 00:06:52.285 user 0m27.576s 00:06:52.285 sys 0m4.179s 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.285 22:02:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.285 00:06:52.285 real 0m42.497s 00:06:52.285 user 1m23.451s 00:06:52.285 sys 0m6.918s 00:06:52.285 22:02:58 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.285 ************************************ 00:06:52.285 END TEST event 00:06:52.285 ************************************ 00:06:52.285 22:02:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.546 22:02:58 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.546 22:02:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.546 22:02:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.546 22:02:58 -- common/autotest_common.sh@10 -- # set +x 00:06:52.546 ************************************ 00:06:52.546 START TEST thread 00:06:52.546 ************************************ 00:06:52.546 22:02:58 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.546 * Looking for test storage... 
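The asterisk banners that delimit every test in this log come from the run_test helper. A condensed sketch of its shape, reconstructed from the traces above; the real implementation in test/common/autotest_common.sh also records timing and saves and restores xtrace state:

    run_test() {
        local name=$1
        shift
        (( $# >= 1 )) || return 1   # the "'[' 2 -le 1 ']'" guard in the trace
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }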
00:06:52.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:52.546 22:02:58 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:52.546 22:02:58 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:52.546 22:02:58 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:52.546 22:02:58 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:52.546 22:02:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.546 22:02:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.546 22:02:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.546 22:02:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.546 22:02:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.546 22:02:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.547 22:02:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.547 22:02:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.547 22:02:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.547 22:02:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.547 22:02:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.547 22:02:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:52.547 22:02:58 thread -- scripts/common.sh@345 -- # : 1 00:06:52.547 22:02:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.547 22:02:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.547 22:02:58 thread -- scripts/common.sh@365 -- # decimal 1 00:06:52.547 22:02:58 thread -- scripts/common.sh@353 -- # local d=1 00:06:52.547 22:02:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.547 22:02:58 thread -- scripts/common.sh@355 -- # echo 1 00:06:52.547 22:02:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.547 22:02:58 thread -- scripts/common.sh@366 -- # decimal 2 00:06:52.547 22:02:58 thread -- scripts/common.sh@353 -- # local d=2 00:06:52.547 22:02:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.547 22:02:58 thread -- scripts/common.sh@355 -- # echo 2 00:06:52.547 22:02:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.547 22:02:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.547 22:02:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.547 22:02:58 thread -- scripts/common.sh@368 -- # return 0 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:52.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.547 --rc genhtml_branch_coverage=1 00:06:52.547 --rc genhtml_function_coverage=1 00:06:52.547 --rc genhtml_legend=1 00:06:52.547 --rc geninfo_all_blocks=1 00:06:52.547 --rc geninfo_unexecuted_blocks=1 00:06:52.547 00:06:52.547 ' 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:52.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.547 --rc genhtml_branch_coverage=1 00:06:52.547 --rc genhtml_function_coverage=1 00:06:52.547 --rc genhtml_legend=1 00:06:52.547 --rc geninfo_all_blocks=1 00:06:52.547 --rc geninfo_unexecuted_blocks=1 00:06:52.547 00:06:52.547 ' 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:52.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:52.547 --rc genhtml_branch_coverage=1 00:06:52.547 --rc genhtml_function_coverage=1 00:06:52.547 --rc genhtml_legend=1 00:06:52.547 --rc geninfo_all_blocks=1 00:06:52.547 --rc geninfo_unexecuted_blocks=1 00:06:52.547 00:06:52.547 ' 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:52.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.547 --rc genhtml_branch_coverage=1 00:06:52.547 --rc genhtml_function_coverage=1 00:06:52.547 --rc genhtml_legend=1 00:06:52.547 --rc geninfo_all_blocks=1 00:06:52.547 --rc geninfo_unexecuted_blocks=1 00:06:52.547 00:06:52.547 ' 00:06:52.547 22:02:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.547 22:02:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.547 ************************************ 00:06:52.547 START TEST thread_poller_perf 00:06:52.547 ************************************ 00:06:52.547 22:02:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.547 [2024-12-16 22:02:58.837739] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:52.547 [2024-12-16 22:02:58.837894] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73279 ] 00:06:52.809 [2024-12-16 22:02:58.993786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.809 Running 1000 pollers for 1 seconds with 1 microseconds period. 
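Before the first run's results below, a note on the lcov gate that ran just above (and repeats before each test group): scripts/common.sh compares "lcov --version" against 2 component-wise to choose coverage flags. A condensed sketch of that comparison; the real lt/cmp_versions helpers also handle '>', '=' and pre-release fields:

    lt() {  # usage: lt 1.15 2 -> returns 0 iff $1 < $2
        local -a ver1 ver2
        local v a b n
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # prints: lcov predates 2.x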
00:06:52.809 [2024-12-16 22:02:59.016752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.748 [2024-12-16T22:03:00.095Z] ====================================== 00:06:53.748 [2024-12-16T22:03:00.095Z] busy:2609476216 (cyc) 00:06:53.748 [2024-12-16T22:03:00.095Z] total_run_count: 413000 00:06:53.748 [2024-12-16T22:03:00.095Z] tsc_hz: 2600000000 (cyc) 00:06:53.748 [2024-12-16T22:03:00.095Z] ====================================== 00:06:53.748 [2024-12-16T22:03:00.095Z] poller_cost: 6318 (cyc), 2430 (nsec) 00:06:53.748 00:06:53.748 real 0m1.256s 00:06:53.748 user 0m1.071s 00:06:53.748 sys 0m0.079s 00:06:53.748 22:03:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.748 ************************************ 00:06:53.748 END TEST thread_poller_perf 00:06:53.748 ************************************ 00:06:53.748 22:03:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.009 22:03:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.009 22:03:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:54.009 22:03:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.009 22:03:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.009 ************************************ 00:06:54.009 START TEST thread_poller_perf 00:06:54.009 ************************************ 00:06:54.009 22:03:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.009 [2024-12-16 22:03:00.147898] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:54.009 [2024-12-16 22:03:00.147988] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73316 ] 00:06:54.009 [2024-12-16 22:03:00.295075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.009 Running 1000 pollers for 1 seconds with 0 microseconds period. 
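The poller_cost line is derived from the two figures above it: busy cycles divided by total_run_count gives cycles per poller invocation, and scaling by tsc_hz converts that to nanoseconds. Reproducing the first run's numbers:

    busy=2609476216 runs=413000 tsc_hz=2600000000
    echo "cyc:  $(( busy / runs ))"                       # -> 6318
    echo "nsec: $(( busy * 1000000000 / runs / tsc_hz ))" # -> 2430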
00:06:54.009 [2024-12-16 22:03:00.311635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.022 [2024-12-16T22:03:01.369Z] ====================================== 00:06:55.022 [2024-12-16T22:03:01.369Z] busy:2603087446 (cyc) 00:06:55.022 [2024-12-16T22:03:01.369Z] total_run_count: 4786000 00:06:55.022 [2024-12-16T22:03:01.369Z] tsc_hz: 2600000000 (cyc) 00:06:55.022 [2024-12-16T22:03:01.369Z] ====================================== 00:06:55.022 [2024-12-16T22:03:01.369Z] poller_cost: 543 (cyc), 208 (nsec) 00:06:55.022 ************************************ 00:06:55.022 END TEST thread_poller_perf 00:06:55.022 ************************************ 00:06:55.022 00:06:55.022 real 0m1.223s 00:06:55.022 user 0m1.063s 00:06:55.022 sys 0m0.055s 00:06:55.022 22:03:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.022 22:03:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.284 22:03:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.284 ************************************ 00:06:55.284 END TEST thread 00:06:55.284 ************************************ 00:06:55.284 00:06:55.284 real 0m2.721s 00:06:55.284 user 0m2.258s 00:06:55.284 sys 0m0.237s 00:06:55.284 22:03:01 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.284 22:03:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.284 22:03:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:55.284 22:03:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.284 22:03:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.284 22:03:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.284 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:06:55.284 ************************************ 00:06:55.284 START TEST app_cmdline 00:06:55.284 ************************************ 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.284 * Looking for test storage... 
00:06:55.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:55.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.284 22:03:01 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.284 --rc genhtml_branch_coverage=1 00:06:55.284 --rc genhtml_function_coverage=1 00:06:55.284 --rc genhtml_legend=1 00:06:55.284 --rc geninfo_all_blocks=1 00:06:55.284 --rc geninfo_unexecuted_blocks=1 00:06:55.284 00:06:55.284 ' 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.284 --rc genhtml_branch_coverage=1 00:06:55.284 --rc genhtml_function_coverage=1 00:06:55.284 --rc genhtml_legend=1 00:06:55.284 --rc geninfo_all_blocks=1 00:06:55.284 --rc geninfo_unexecuted_blocks=1 00:06:55.284 00:06:55.284 ' 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.284 --rc genhtml_branch_coverage=1 00:06:55.284 --rc genhtml_function_coverage=1 00:06:55.284 --rc genhtml_legend=1 00:06:55.284 --rc geninfo_all_blocks=1 00:06:55.284 --rc geninfo_unexecuted_blocks=1 00:06:55.284 00:06:55.284 ' 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.284 --rc genhtml_branch_coverage=1 00:06:55.284 --rc genhtml_function_coverage=1 00:06:55.284 --rc genhtml_legend=1 00:06:55.284 --rc geninfo_all_blocks=1 00:06:55.284 --rc geninfo_unexecuted_blocks=1 00:06:55.284 00:06:55.284 ' 00:06:55.284 22:03:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.284 22:03:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73399 00:06:55.284 22:03:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73399 00:06:55.284 22:03:01 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73399 ']' 00:06:55.284 22:03:01 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.285 22:03:01 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.285 22:03:01 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.285 22:03:01 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.285 22:03:01 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.285 22:03:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.285 [2024-12-16 22:03:01.607904] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:55.285 [2024-12-16 22:03:01.607992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73399 ] 00:06:55.545 [2024-12-16 22:03:01.760787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.545 [2024-12-16 22:03:01.779200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.116 22:03:02 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.116 22:03:02 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:56.116 22:03:02 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:56.377 { 00:06:56.377 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:56.377 "fields": { 00:06:56.377 "major": 25, 00:06:56.377 "minor": 1, 00:06:56.377 "patch": 0, 00:06:56.377 "suffix": "-pre", 00:06:56.377 "commit": "e01cb43b8" 00:06:56.377 } 00:06:56.377 } 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.377 22:03:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:56.377 22:03:02 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.638 request: 00:06:56.638 { 00:06:56.638 "method": "env_dpdk_get_mem_stats", 00:06:56.638 "req_id": 1 00:06:56.638 } 00:06:56.638 Got JSON-RPC error response 00:06:56.638 response: 00:06:56.638 { 00:06:56.638 "code": -32601, 00:06:56.638 "message": "Method not found" 00:06:56.638 } 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.638 22:03:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73399 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73399 ']' 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73399 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73399 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.638 killing process with pid 73399 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73399' 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 73399 00:06:56.638 22:03:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 73399 00:06:56.899 00:06:56.899 real 0m1.811s 00:06:56.899 user 0m2.115s 00:06:56.899 sys 0m0.384s 00:06:56.899 22:03:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.899 ************************************ 00:06:56.899 END TEST app_cmdline 00:06:56.899 ************************************ 00:06:56.899 22:03:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.159 22:03:03 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.159 22:03:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.159 22:03:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.159 22:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.159 ************************************ 00:06:57.159 START TEST version 00:06:57.159 ************************************ 00:06:57.159 22:03:03 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:57.159 * Looking for test storage... 
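The "Method not found" (-32601) above is the allow-list working as intended: cmdline.sh started this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is rejected even though it exists in an unrestricted target. Sketched with the paths from this log; the test additionally waits for the socket before issuing calls:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version          # allowed, returns the version object
    $rpc env_dpdk_get_mem_stats    # rejected with -32601 Method not found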
00:06:57.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.160 22:03:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.160 22:03:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.160 22:03:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.160 22:03:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.160 22:03:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.160 22:03:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.160 22:03:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.160 22:03:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.160 22:03:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.160 22:03:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.160 22:03:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.160 22:03:03 version -- scripts/common.sh@344 -- # case "$op" in 00:06:57.160 22:03:03 version -- scripts/common.sh@345 -- # : 1 00:06:57.160 22:03:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.160 22:03:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.160 22:03:03 version -- scripts/common.sh@365 -- # decimal 1 00:06:57.160 22:03:03 version -- scripts/common.sh@353 -- # local d=1 00:06:57.160 22:03:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.160 22:03:03 version -- scripts/common.sh@355 -- # echo 1 00:06:57.160 22:03:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.160 22:03:03 version -- scripts/common.sh@366 -- # decimal 2 00:06:57.160 22:03:03 version -- scripts/common.sh@353 -- # local d=2 00:06:57.160 22:03:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.160 22:03:03 version -- scripts/common.sh@355 -- # echo 2 00:06:57.160 22:03:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.160 22:03:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.160 22:03:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.160 22:03:03 version -- scripts/common.sh@368 -- # return 0 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.160 --rc genhtml_branch_coverage=1 00:06:57.160 --rc genhtml_function_coverage=1 00:06:57.160 --rc genhtml_legend=1 00:06:57.160 --rc geninfo_all_blocks=1 00:06:57.160 --rc geninfo_unexecuted_blocks=1 00:06:57.160 00:06:57.160 ' 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.160 --rc genhtml_branch_coverage=1 00:06:57.160 --rc genhtml_function_coverage=1 00:06:57.160 --rc genhtml_legend=1 00:06:57.160 --rc geninfo_all_blocks=1 00:06:57.160 --rc geninfo_unexecuted_blocks=1 00:06:57.160 00:06:57.160 ' 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.160 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:57.160 --rc genhtml_branch_coverage=1 00:06:57.160 --rc genhtml_function_coverage=1 00:06:57.160 --rc genhtml_legend=1 00:06:57.160 --rc geninfo_all_blocks=1 00:06:57.160 --rc geninfo_unexecuted_blocks=1 00:06:57.160 00:06:57.160 ' 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.160 --rc genhtml_branch_coverage=1 00:06:57.160 --rc genhtml_function_coverage=1 00:06:57.160 --rc genhtml_legend=1 00:06:57.160 --rc geninfo_all_blocks=1 00:06:57.160 --rc geninfo_unexecuted_blocks=1 00:06:57.160 00:06:57.160 ' 00:06:57.160 22:03:03 version -- app/version.sh@17 -- # get_header_version major 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.160 22:03:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # cut -f2 00:06:57.160 22:03:03 version -- app/version.sh@17 -- # major=25 00:06:57.160 22:03:03 version -- app/version.sh@18 -- # get_header_version minor 00:06:57.160 22:03:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # cut -f2 00:06:57.160 22:03:03 version -- app/version.sh@18 -- # minor=1 00:06:57.160 22:03:03 version -- app/version.sh@19 -- # get_header_version patch 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.160 22:03:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # cut -f2 00:06:57.160 22:03:03 version -- app/version.sh@19 -- # patch=0 00:06:57.160 22:03:03 version -- app/version.sh@20 -- # get_header_version suffix 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # cut -f2 00:06:57.160 22:03:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:57.160 22:03:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:57.160 22:03:03 version -- app/version.sh@20 -- # suffix=-pre 00:06:57.160 22:03:03 version -- app/version.sh@22 -- # version=25.1 00:06:57.160 22:03:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:57.160 22:03:03 version -- app/version.sh@28 -- # version=25.1rc0 00:06:57.160 22:03:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:57.160 22:03:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:57.160 22:03:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:57.160 22:03:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:57.160 00:06:57.160 real 0m0.186s 00:06:57.160 user 0m0.118s 00:06:57.160 sys 0m0.093s 00:06:57.160 ************************************ 00:06:57.160 END TEST version 00:06:57.160 ************************************ 00:06:57.160 22:03:03 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.160 22:03:03 version -- common/autotest_common.sh@10 -- # set +x 00:06:57.160 22:03:03 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:57.160 22:03:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:57.160 22:03:03 -- spdk/autotest.sh@194 -- # uname -s 00:06:57.160 22:03:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:57.160 22:03:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.160 22:03:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:57.160 22:03:03 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:57.160 22:03:03 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:57.160 22:03:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.160 22:03:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.160 22:03:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.160 ************************************ 00:06:57.160 START TEST blockdev_nvme 00:06:57.160 ************************************ 00:06:57.160 22:03:03 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:57.422 * Looking for test storage... 00:06:57.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.422 22:03:03 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.422 --rc genhtml_branch_coverage=1 00:06:57.422 --rc genhtml_function_coverage=1 00:06:57.422 --rc genhtml_legend=1 00:06:57.422 --rc geninfo_all_blocks=1 00:06:57.422 --rc geninfo_unexecuted_blocks=1 00:06:57.422 00:06:57.422 ' 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.422 --rc genhtml_branch_coverage=1 00:06:57.422 --rc genhtml_function_coverage=1 00:06:57.422 --rc genhtml_legend=1 00:06:57.422 --rc geninfo_all_blocks=1 00:06:57.422 --rc geninfo_unexecuted_blocks=1 00:06:57.422 00:06:57.422 ' 00:06:57.422 22:03:03 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.422 --rc genhtml_branch_coverage=1 00:06:57.422 --rc genhtml_function_coverage=1 00:06:57.422 --rc genhtml_legend=1 00:06:57.423 --rc geninfo_all_blocks=1 00:06:57.423 --rc geninfo_unexecuted_blocks=1 00:06:57.423 00:06:57.423 ' 00:06:57.423 22:03:03 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.423 --rc genhtml_branch_coverage=1 00:06:57.423 --rc genhtml_function_coverage=1 00:06:57.423 --rc genhtml_legend=1 00:06:57.423 --rc geninfo_all_blocks=1 00:06:57.423 --rc geninfo_unexecuted_blocks=1 00:06:57.423 00:06:57.423 ' 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:57.423 22:03:03 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:57.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73560 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73560 00:06:57.423 22:03:03 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73560 ']' 00:06:57.423 22:03:03 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.423 22:03:03 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.423 22:03:03 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.423 22:03:03 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.423 22:03:03 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:57.423 22:03:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.423 [2024-12-16 22:03:03.706700] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
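The start_spdk_tgt sequence traced above reduces to a start/trap/wait shape. A minimal sketch of that pattern, assuming the waitforlisten and killprocess helpers from autotest_common.sh behave as their trace output suggests:

    # Launch the SPDK target in the background and remember its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    # Tear it down on any exit path so a hung target cannot wedge the job.
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Block until the RPC socket (/var/tmp/spdk.sock by default) accepts connections.
    waitforlisten "$spdk_tgt_pid"
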
00:06:57.423 [2024-12-16 22:03:03.706814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73560 ] 00:06:57.683 [2024-12-16 22:03:03.862992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.683 [2024-12-16 22:03:03.881321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.253 22:03:04 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.253 22:03:04 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:58.253 22:03:04 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:58.253 22:03:04 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:58.253 22:03:04 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:58.253 22:03:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:58.253 22:03:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:58.253 22:03:04 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:58.253 22:03:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.253 22:03:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.823 22:03:04 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:58.823 22:03:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:58.823 22:03:04 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:58.824 22:03:04 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ba5b7682-1549-45c9-9f1e-e769bf7b73c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ba5b7682-1549-45c9-9f1e-e769bf7b73c0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "18fba0a7-e4de-48d7-9fdb-de0fb93cee0a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "18fba0a7-e4de-48d7-9fdb-de0fb93cee0a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "25565b3f-b0d2-43ce-9611-51b6d69425ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "25565b3f-b0d2-43ce-9611-51b6d69425ae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "469a2f39-6d68-4d96-a3f5-714901f5053e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "469a2f39-6d68-4d96-a3f5-714901f5053e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e24fcafa-cabf-4138-8f68-fd8ba6643ef1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "e24fcafa-cabf-4138-8f68-fd8ba6643ef1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ae0262bc-6810-4a03-8048-9b5516242a79"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ae0262bc-6810-4a03-8048-9b5516242a79",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:58.824 22:03:05 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:58.824 22:03:05 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:58.824 22:03:05 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:58.824 22:03:05 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 73560 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73560 ']' 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73560 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:58.824 22:03:05 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73560 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.824 killing process with pid 73560 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73560' 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73560 00:06:58.824 22:03:05 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73560 00:06:59.084 22:03:05 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:59.084 22:03:05 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.084 22:03:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:59.084 22:03:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.084 22:03:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.084 ************************************ 00:06:59.084 START TEST bdev_hello_world 00:06:59.084 ************************************ 00:06:59.084 22:03:05 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.084 [2024-12-16 22:03:05.362014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:59.084 [2024-12-16 22:03:05.362290] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73632 ] 00:06:59.344 [2024-12-16 22:03:05.516663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.344 [2024-12-16 22:03:05.535345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.604 [2024-12-16 22:03:05.905713] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:59.604 [2024-12-16 22:03:05.905762] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:59.604 [2024-12-16 22:03:05.905786] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:59.604 [2024-12-16 22:03:05.907858] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:59.605 [2024-12-16 22:03:05.908281] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:59.605 [2024-12-16 22:03:05.908311] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:59.605 [2024-12-16 22:03:05.908515] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
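The killprocess teardown of pid 73560 traced above at 22:03:05 is a guarded kill-and-reap; a rough sketch of that logic as read from the trace, not the verbatim helper:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # no pid, nothing to kill
        kill -0 "$pid" || return 1         # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            # never signal a sudo wrapper by mistake; here the comm is reactor_0
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap it so the pid cannot be recycled mid-run
    }
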
00:06:59.605 00:06:59.605 [2024-12-16 22:03:05.908536] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:59.865 00:06:59.865 real 0m0.743s 00:06:59.865 user 0m0.502s 00:06:59.865 sys 0m0.138s 00:06:59.865 22:03:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.865 22:03:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:59.865 ************************************ 00:06:59.865 END TEST bdev_hello_world 00:06:59.865 ************************************ 00:06:59.865 22:03:06 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:59.865 22:03:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.865 22:03:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.865 22:03:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.865 ************************************ 00:06:59.865 START TEST bdev_bounds 00:06:59.865 ************************************ 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73653 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.865 Process bdevio pid: 73653 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73653' 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73653 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73653 ']' 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.865 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.865 [2024-12-16 22:03:06.147360] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
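run_test, which wrapped bdev_hello_world above and wraps bdev_bounds here, brackets each suite with banners and timing. A simplified sketch consistent with the START/END markers and the real/user/sys lines in this log:

    run_test() {
        [ $# -le 1 ] && return 1    # need a test name plus a command to run
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                   # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
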
00:06:59.865 [2024-12-16 22:03:06.147469] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73653 ] 00:07:00.125 [2024-12-16 22:03:06.301484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.125 [2024-12-16 22:03:06.322537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.125 [2024-12-16 22:03:06.323076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.125 [2024-12-16 22:03:06.323166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.691 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.691 22:03:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:00.691 22:03:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.949 I/O targets: 00:07:00.949 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.949 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:00.949 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.949 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.949 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.949 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.949 00:07:00.949 00:07:00.949 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.949 http://cunit.sourceforge.net/ 00:07:00.949 00:07:00.949 00:07:00.949 Suite: bdevio tests on: Nvme3n1 00:07:00.949 Test: blockdev write read block ...passed 00:07:00.949 Test: blockdev write zeroes read block ...passed 00:07:00.949 Test: blockdev write zeroes read no split ...passed 00:07:00.949 Test: blockdev write zeroes read split ...passed 00:07:00.949 Test: blockdev write zeroes read split partial ...passed 00:07:00.949 Test: blockdev reset ...[2024-12-16 22:03:07.087101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:00.949 passed 00:07:00.949 Test: blockdev write read 8 blocks ...[2024-12-16 22:03:07.088754] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:00.949 passed 00:07:00.949 Test: blockdev write read size > 128k ...passed 00:07:00.949 Test: blockdev write read invalid size ...passed 00:07:00.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.949 Test: blockdev write read max offset ...passed 00:07:00.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.949 Test: blockdev writev readv 8 blocks ...passed 00:07:00.949 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.949 Test: blockdev writev readv block ...passed 00:07:00.949 Test: blockdev writev readv size > 128k ...passed 00:07:00.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.949 Test: blockdev comparev and writev ...[2024-12-16 22:03:07.092636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b180e000 len:0x1000 00:07:00.949 [2024-12-16 22:03:07.092682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.949 passed 00:07:00.949 Test: blockdev nvme passthru rw ...passed 00:07:00.949 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.949 Test: blockdev nvme admin passthru ...[2024-12-16 22:03:07.093205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.949 [2024-12-16 22:03:07.093237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.949 passed 00:07:00.949 Test: blockdev copy ...passed 00:07:00.949 Suite: bdevio tests on: Nvme2n3 00:07:00.949 Test: blockdev write read block ...passed 00:07:00.949 Test: blockdev write zeroes read block ...passed 00:07:00.949 Test: blockdev write zeroes read no split ...passed 00:07:00.950 Test: blockdev write zeroes read split ...passed 00:07:00.950 Test: blockdev write zeroes read split partial ...passed 00:07:00.950 Test: blockdev reset ...[2024-12-16 22:03:07.111442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.950 passed 00:07:00.950 Test: blockdev write read 8 blocks ...[2024-12-16 22:03:07.113334] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.950 passed 00:07:00.950 Test: blockdev write read size > 128k ...passed 00:07:00.950 Test: blockdev write read invalid size ...passed 00:07:00.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.950 Test: blockdev write read max offset ...passed 00:07:00.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.950 Test: blockdev writev readv 8 blocks ...passed 00:07:00.950 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.950 Test: blockdev writev readv block ...passed 00:07:00.950 Test: blockdev writev readv size > 128k ...passed 00:07:00.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.950 Test: blockdev comparev and writev ...[2024-12-16 22:03:07.118705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:07:00.950 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b1806000 len:0x1000 00:07:00.950 [2024-12-16 22:03:07.118832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.950 Test: blockdev nvme admin passthru ...[2024-12-16 22:03:07.120066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.950 [2024-12-16 22:03:07.120148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev copy ...passed 00:07:00.950 Suite: bdevio tests on: Nvme2n2 00:07:00.950 Test: blockdev write read block ...passed 00:07:00.950 Test: blockdev write zeroes read block ...passed 00:07:00.950 Test: blockdev write zeroes read no split ...passed 00:07:00.950 Test: blockdev write zeroes read split ...passed 00:07:00.950 Test: blockdev write zeroes read split partial ...passed 00:07:00.950 Test: blockdev reset ...[2024-12-16 22:03:07.136463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.950 passed 00:07:00.950 Test: blockdev write read 8 blocks ...[2024-12-16 22:03:07.140268] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.950 passed 00:07:00.950 Test: blockdev write read size > 128k ...passed 00:07:00.950 Test: blockdev write read invalid size ...passed 00:07:00.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.950 Test: blockdev write read max offset ...passed 00:07:00.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.950 Test: blockdev writev readv 8 blocks ...passed 00:07:00.950 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.950 Test: blockdev writev readv block ...passed 00:07:00.950 Test: blockdev writev readv size > 128k ...passed 00:07:00.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.950 Test: blockdev comparev and writev ...[2024-12-16 22:03:07.144628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1808000 len:0x1000 00:07:00.950 [2024-12-16 22:03:07.144675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev nvme passthru rw ...passed 00:07:00.950 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:03:07.145215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.950 [2024-12-16 22:03:07.145324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev nvme admin passthru ...passed 00:07:00.950 Test: blockdev copy ...passed 00:07:00.950 Suite: bdevio tests on: Nvme2n1 00:07:00.950 Test: blockdev write read block ...passed 00:07:00.950 Test: blockdev write zeroes read block ...passed 00:07:00.950 Test: blockdev write zeroes read no split ...passed 00:07:00.950 Test: blockdev write zeroes read split ...passed 00:07:00.950 Test: blockdev write zeroes read split partial ...passed 00:07:00.950 Test: blockdev reset ...[2024-12-16 22:03:07.158448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.950 [2024-12-16 22:03:07.160874] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:07:00.950 passed 00:07:00.950 Test: blockdev write read 8 blocks ...
00:07:00.950 passed 00:07:00.950 Test: blockdev write read size > 128k ...passed 00:07:00.950 Test: blockdev write read invalid size ...passed 00:07:00.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.950 Test: blockdev write read max offset ...passed 00:07:00.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.950 Test: blockdev writev readv 8 blocks ...passed 00:07:00.950 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.950 Test: blockdev writev readv block ...passed 00:07:00.950 Test: blockdev writev readv size > 128k ...passed 00:07:00.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.950 Test: blockdev comparev and writev ...[2024-12-16 22:03:07.165063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1404000 len:0x1000 00:07:00.950 [2024-12-16 22:03:07.165103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev nvme passthru rw ...passed 00:07:00.950 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.950 Test: blockdev nvme admin passthru ...[2024-12-16 22:03:07.165546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.950 [2024-12-16 22:03:07.165574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev copy ...passed 00:07:00.950 Suite: bdevio tests on: Nvme1n1 00:07:00.950 Test: blockdev write read block ...passed 00:07:00.950 Test: blockdev write zeroes read block ...passed 00:07:00.950 Test: blockdev write zeroes read no split ...passed 00:07:00.950 Test: blockdev write zeroes read split ...passed 00:07:00.950 Test: blockdev write zeroes read split partial ...passed 00:07:00.950 Test: blockdev reset ...[2024-12-16 22:03:07.186362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:00.950 [2024-12-16 22:03:07.189479] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:00.950 passed 00:07:00.950 Test: blockdev write read 8 blocks ...
00:07:00.950 passed 00:07:00.950 Test: blockdev write read size > 128k ...passed 00:07:00.950 Test: blockdev write read invalid size ...passed 00:07:00.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.950 Test: blockdev write read max offset ...passed 00:07:00.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.950 Test: blockdev writev readv 8 blocks ...passed 00:07:00.950 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.950 Test: blockdev writev readv block ...passed 00:07:00.950 Test: blockdev writev readv size > 128k ...passed 00:07:00.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.950 Test: blockdev comparev and writev ...[2024-12-16 22:03:07.207323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e803d000 len:0x1000 00:07:00.950 [2024-12-16 22:03:07.207370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev nvme passthru rw ...passed 00:07:00.950 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.950 Test: blockdev nvme admin passthru ...[2024-12-16 22:03:07.209810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.950 [2024-12-16 22:03:07.209941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev copy ...passed 00:07:00.950 Suite: bdevio tests on: Nvme0n1 00:07:00.950 Test: blockdev write read block ...passed 00:07:00.950 Test: blockdev write zeroes read block ...passed 00:07:00.950 Test: blockdev write zeroes read no split ...passed 00:07:00.950 Test: blockdev write zeroes read split ...passed 00:07:00.950 Test: blockdev write zeroes read split partial ...passed 00:07:00.950 Test: blockdev reset ...[2024-12-16 22:03:07.236944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:00.950 passed 00:07:00.950 Test: blockdev write read 8 blocks ...[2024-12-16 22:03:07.238733] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:00.950 passed 00:07:00.950 Test: blockdev write read size > 128k ...passed 00:07:00.950 Test: blockdev write read invalid size ...passed 00:07:00.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.950 Test: blockdev write read max offset ...passed 00:07:00.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.950 Test: blockdev writev readv 8 blocks ...passed 00:07:00.950 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.950 Test: blockdev writev readv block ...passed 00:07:00.950 Test: blockdev writev readv size > 128k ...passed 00:07:00.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.950 Test: blockdev comparev and writev ...passed 00:07:00.950 Test: blockdev nvme passthru rw ...[2024-12-16 22:03:07.251490] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:00.950 separate metadata which is not supported yet. 
00:07:00.950 passed 00:07:00.950 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:03:07.252806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:00.950 [2024-12-16 22:03:07.252854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:00.950 passed 00:07:00.950 Test: blockdev nvme admin passthru ...passed 00:07:00.950 Test: blockdev copy ...passed 00:07:00.950 00:07:00.950 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.950 suites 6 6 n/a 0 0 00:07:00.951 tests 138 138 138 0 0 00:07:00.951 asserts 893 893 893 0 n/a 00:07:00.951 00:07:00.951 Elapsed time = 0.411 seconds 00:07:00.951 0 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73653 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73653 ']' 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73653 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73653 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73653' 00:07:00.951 killing process with pid 73653 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73653 00:07:00.951 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73653 00:07:01.209 22:03:07 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:01.209 00:07:01.209 real 0m1.342s 00:07:01.209 user 0m3.419s 00:07:01.209 sys 0m0.258s 00:07:01.209 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.209 22:03:07 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 ************************************ 00:07:01.209 END TEST bdev_bounds 00:07:01.209 ************************************ 00:07:01.209 22:03:07 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.209 22:03:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:01.209 22:03:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.209 22:03:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 ************************************ 00:07:01.209 START TEST bdev_nbd 00:07:01.209 ************************************ 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73707 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73707 /var/tmp/spdk-nbd.sock 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73707 ']' 00:07:01.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.209 22:03:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:01.209 [2024-12-16 22:03:07.553095] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
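Each nbd_start_disk in the traces that follow is verified the same way: poll /proc/partitions until the device appears, then read one block through it. A condensed sketch of that waitfornbd-plus-dd probe, with the retry delay assumed (the trace only shows the 20-iteration loop bounds and a successful first probe):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the kernel lists the device in /proc/partitions once nbd attach completes
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off between polls; not visible in this trace
        done
        # prove the data path with one 4 KiB direct-I/O read off the device
        dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]    # a non-empty copy means reads through the nbd device work
    }
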
00:07:01.209 [2024-12-16 22:03:07.553212] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.467 [2024-12-16 22:03:07.702592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.467 [2024-12-16 22:03:07.721541] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.400 1+0 records in 
00:07:02.400 1+0 records out 00:07:02.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923177 s, 4.4 MB/s 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.400 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.658 1+0 records in 00:07:02.658 1+0 records out 00:07:02.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108561 s, 3.8 MB/s 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.658 22:03:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.916 1+0 records in 00:07:02.916 1+0 records out 00:07:02.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000958263 s, 4.3 MB/s 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.916 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.235 1+0 records in 00:07:03.235 1+0 records out 00:07:03.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482392 s, 8.5 MB/s 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.235 22:03:09 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.235 1+0 records in 00:07:03.235 1+0 records out 00:07:03.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00097007 s, 4.2 MB/s 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.235 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.493 1+0 records in 00:07:03.493 1+0 records out 00:07:03.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102038 s, 4.0 MB/s 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.493 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.751 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd0", 00:07:03.751 "bdev_name": "Nvme0n1" 00:07:03.751 }, 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd1", 00:07:03.751 "bdev_name": "Nvme1n1" 00:07:03.751 }, 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd2", 00:07:03.751 "bdev_name": "Nvme2n1" 00:07:03.751 }, 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd3", 00:07:03.751 "bdev_name": "Nvme2n2" 00:07:03.751 }, 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd4", 00:07:03.751 "bdev_name": "Nvme2n3" 00:07:03.751 }, 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd5", 00:07:03.751 "bdev_name": "Nvme3n1" 00:07:03.751 } 00:07:03.751 ]' 00:07:03.751 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:03.751 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd0", 00:07:03.751 "bdev_name": "Nvme0n1" 00:07:03.751 }, 00:07:03.751 { 00:07:03.751 "nbd_device": "/dev/nbd1", 00:07:03.751 "bdev_name": "Nvme1n1" 00:07:03.751 }, 00:07:03.752 { 00:07:03.752 "nbd_device": "/dev/nbd2", 00:07:03.752 "bdev_name": "Nvme2n1" 00:07:03.752 }, 00:07:03.752 { 00:07:03.752 "nbd_device": "/dev/nbd3", 00:07:03.752 "bdev_name": "Nvme2n2" 00:07:03.752 }, 00:07:03.752 { 00:07:03.752 "nbd_device": "/dev/nbd4", 00:07:03.752 "bdev_name": "Nvme2n3" 00:07:03.752 }, 00:07:03.752 { 00:07:03.752 "nbd_device": "/dev/nbd5", 00:07:03.752 "bdev_name": "Nvme3n1" 00:07:03.752 } 00:07:03.752 ]' 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.752 22:03:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.010 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.268 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.526 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.784 22:03:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.784 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.041 22:03:11 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.041 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:05.299 /dev/nbd0 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.299 
22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.299 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.300 1+0 records in 00:07:05.300 1+0 records out 00:07:05.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973191 s, 4.2 MB/s 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.300 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:05.557 /dev/nbd1 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.557 1+0 records in 00:07:05.557 1+0 records out 00:07:05.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788216 s, 5.2 MB/s 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.557 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.558 22:03:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:07:05.558 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.558 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.558 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:05.558 /dev/nbd10 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.815 1+0 records in 00:07:05.815 1+0 records out 00:07:05.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101637 s, 4.0 MB/s 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.815 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.816 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.816 22:03:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:05.816 /dev/nbd11 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.816 22:03:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.816 1+0 records in 00:07:05.816 1+0 records out 00:07:05.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151725 s, 2.7 MB/s 00:07:05.816 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:06.073 /dev/nbd12 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.073 1+0 records in 00:07:06.073 1+0 records out 00:07:06.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864031 s, 4.7 MB/s 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.073 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:06.331 /dev/nbd13 
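Every nbd_start_disk call in this trace is followed by the same wait-and-probe helper, waitfornbd, from autotest_common.sh: poll /proc/partitions until the kernel registers the device (up to 20 attempts), then issue one 4 KiB O_DIRECT read with dd and require a non-empty result file. A condensed bash sketch of that pattern follows; it is not the verbatim helper, and the retry sleep and the /tmp scratch path are assumptions, since the xtrace output shows neither.

    waitfornbd() {
        local nbd_name=$1 i
        # Poll until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry interval; not visible in the trace
        done
        # Probe with a single 4 KiB direct read to prove the export serves I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # a non-empty read-back means the device is live
    }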
00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.331 1+0 records in 00:07:06.331 1+0 records out 00:07:06.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000891656 s, 4.6 MB/s 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.331 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd0", 00:07:06.590 "bdev_name": "Nvme0n1" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd1", 00:07:06.590 "bdev_name": "Nvme1n1" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd10", 00:07:06.590 "bdev_name": "Nvme2n1" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd11", 00:07:06.590 "bdev_name": "Nvme2n2" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd12", 00:07:06.590 "bdev_name": "Nvme2n3" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd13", 00:07:06.590 "bdev_name": "Nvme3n1" 00:07:06.590 } 00:07:06.590 ]' 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd0", 00:07:06.590 "bdev_name": "Nvme0n1" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd1", 00:07:06.590 "bdev_name": "Nvme1n1" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd10", 00:07:06.590 "bdev_name": "Nvme2n1" 
00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd11", 00:07:06.590 "bdev_name": "Nvme2n2" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd12", 00:07:06.590 "bdev_name": "Nvme2n3" 00:07:06.590 }, 00:07:06.590 { 00:07:06.590 "nbd_device": "/dev/nbd13", 00:07:06.590 "bdev_name": "Nvme3n1" 00:07:06.590 } 00:07:06.590 ]' 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.590 /dev/nbd1 00:07:06.590 /dev/nbd10 00:07:06.590 /dev/nbd11 00:07:06.590 /dev/nbd12 00:07:06.590 /dev/nbd13' 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.590 /dev/nbd1 00:07:06.590 /dev/nbd10 00:07:06.590 /dev/nbd11 00:07:06.590 /dev/nbd12 00:07:06.590 /dev/nbd13' 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:06.590 256+0 records in 00:07:06.590 256+0 records out 00:07:06.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105301 s, 99.6 MB/s 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.590 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.849 256+0 records in 00:07:06.849 256+0 records out 00:07:06.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0944828 s, 11.1 MB/s 00:07:06.849 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.849 22:03:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.849 256+0 records in 00:07:06.849 256+0 records out 00:07:06.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.061283 s, 17.1 MB/s 00:07:06.849 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.849 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:06.849 256+0 records in 00:07:06.849 256+0 records out 00:07:06.849 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0596586 s, 17.6 MB/s 00:07:06.849 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.849 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:06.849 256+0 records in 00:07:06.849 256+0 records out 00:07:06.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0592999 s, 17.7 MB/s 00:07:06.849 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.849 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:07.107 256+0 records in 00:07:07.107 256+0 records out 00:07:07.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0601325 s, 17.4 MB/s 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:07.107 256+0 records in 00:07:07.107 256+0 records out 00:07:07.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0603528 s, 17.4 MB/s 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.107 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.364 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.622 
22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.622 22:03:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:07.880 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:07.880 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:07.880 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:07.880 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.880 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.881 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:07.881 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.881 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.881 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.881 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.139 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.397 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:08.656 22:03:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:08.917 malloc_lvol_verify 00:07:08.917 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:08.917 bc4f52f8-5cc0-43ae-aa77-1f8f02ff03bf 00:07:08.917 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:09.175 d82c3ead-022d-49fb-927c-12f2413ec745 00:07:09.175 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:09.433 /dev/nbd0 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:09.433 mke2fs 1.47.0 (5-Feb-2023) 00:07:09.433 Discarding device blocks: 0/4096 done 00:07:09.433 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:09.433 00:07:09.433 Allocating group tables: 0/1 done 00:07:09.433 Writing inode tables: 0/1 done 00:07:09.433 Creating journal (1024 blocks): done 00:07:09.433 Writing superblocks and filesystem accounting information: 0/1 done 00:07:09.433 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
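The nbd_with_lvol_verify step traced here exercises the NBD export path end to end with a logical volume rather than a raw NVMe namespace. Condensed to its RPC sequence, with rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and reading the traced arguments 16 512 as malloc size in MB plus block size and 4 as the lvol size, which is an interpretation rather than something the trace states:

    RPC="rpc.py -s /var/tmp/spdk-nbd.sock"
    # Back an lvolstore with a small malloc bdev, then carve out one lvol.
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs
    # Export the lvol over the kernel NBD driver and prove it with a filesystem.
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd0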
00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.433 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73707 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73707 ']' 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73707 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73707 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73707' 00:07:09.691 killing process with pid 73707 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73707 00:07:09.691 22:03:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73707 00:07:09.950 22:03:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:09.950 00:07:09.950 real 0m8.561s 00:07:09.950 user 0m12.593s 00:07:09.950 sys 0m2.819s 00:07:09.950 22:03:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.950 22:03:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.950 ************************************ 00:07:09.950 END TEST bdev_nbd 00:07:09.950 ************************************ 00:07:09.950 22:03:16 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:09.950 22:03:16 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:09.950 skipping fio tests on NVMe due to multi-ns failures. 00:07:09.951 22:03:16 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
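With bdev_nbd wrapped up, the data-integrity scheme behind the nbd_dd_data_verify pass above (the 256+0 records runs) is worth spelling out: one shared 1 MiB random pattern file is written to every exported device with O_DIRECT, then each device is compared byte for byte against the same file. A condensed sketch of the traced write and verify loops, with the repository path to nbdrandtest shortened:

    # One 1 MiB random pattern (256 x 4 KiB blocks) shared by all devices.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for nbd in "${nbd_list[@]}"; do
        # Write pass: push the pattern through each export, bypassing the page cache.
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbd_list[@]}"; do
        # Verify pass: byte-compare the first 1 MiB of the device with the pattern.
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest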
00:07:09.951 22:03:16 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:09.951 22:03:16 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.951 22:03:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:09.951 22:03:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.951 22:03:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.951 ************************************ 00:07:09.951 START TEST bdev_verify 00:07:09.951 ************************************ 00:07:09.951 22:03:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.951 [2024-12-16 22:03:16.146984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:09.951 [2024-12-16 22:03:16.147093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74066 ] 00:07:10.209 [2024-12-16 22:03:16.299061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.209 [2024-12-16 22:03:16.316479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.209 [2024-12-16 22:03:16.316503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.467 Running I/O for 5 seconds... 00:07:12.770 23808.00 IOPS, 93.00 MiB/s [2024-12-16T22:03:20.051Z] 23936.00 IOPS, 93.50 MiB/s [2024-12-16T22:03:20.984Z] 23573.33 IOPS, 92.08 MiB/s [2024-12-16T22:03:21.917Z] 23424.00 IOPS, 91.50 MiB/s [2024-12-16T22:03:21.917Z] 23872.00 IOPS, 93.25 MiB/s 00:07:15.570 Latency(us) 00:07:15.570 [2024-12-16T22:03:21.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.570 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x0 length 0xbd0bd 00:07:15.570 Nvme0n1 : 5.03 2059.83 8.05 0.00 0.00 61894.51 9830.40 77836.60 00:07:15.570 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:15.570 Nvme0n1 : 5.04 1880.95 7.35 0.00 0.00 67745.73 13308.85 123409.33 00:07:15.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x0 length 0xa0000 00:07:15.570 Nvme1n1 : 5.06 2063.21 8.06 0.00 0.00 61736.08 5343.70 71787.13 00:07:15.570 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0xa0000 length 0xa0000 00:07:15.570 Nvme1n1 : 5.08 1891.02 7.39 0.00 0.00 67419.84 11040.30 113730.17 00:07:15.570 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x0 length 0x80000 00:07:15.570 Nvme2n1 : 5.07 2071.34 8.09 0.00 0.00 61517.93 8822.15 67350.84 00:07:15.570 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x80000 length 0x80000 00:07:15.570 Nvme2n1 : 5.08 1890.51 7.38 0.00 0.00 67318.68 9628.75 112923.57 00:07:15.570 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x0 length 0x80000 00:07:15.570 Nvme2n2 : 5.07 2070.36 8.09 0.00 0.00 61437.19 10082.46 67350.84 00:07:15.570 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x80000 length 0x80000 00:07:15.570 Nvme2n2 : 5.08 1890.01 7.38 0.00 0.00 67207.58 9981.64 113730.17 00:07:15.570 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x0 length 0x80000 00:07:15.570 Nvme2n3 : 5.07 2069.87 8.09 0.00 0.00 61327.23 9830.40 72997.02 00:07:15.570 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x80000 length 0x80000 00:07:15.570 Nvme2n3 : 5.08 1888.92 7.38 0.00 0.00 67096.12 10939.47 112923.57 00:07:15.570 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x0 length 0x20000 00:07:15.570 Nvme3n1 : 5.07 2069.41 8.08 0.00 0.00 61228.75 8368.44 78239.90 00:07:15.570 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.570 Verification LBA range: start 0x20000 length 0x20000 00:07:15.570 Nvme3n1 : 5.09 1887.82 7.37 0.00 0.00 67017.98 7259.37 121796.14 00:07:15.570 [2024-12-16T22:03:21.917Z] =================================================================================================================== 00:07:15.570 [2024-12-16T22:03:21.917Z] Total : 23733.25 92.71 0.00 0.00 64283.72 5343.70 123409.33 00:07:16.167 00:07:16.167 real 0m6.302s 00:07:16.167 user 0m11.967s 00:07:16.167 sys 0m0.185s 00:07:16.167 22:03:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.167 ************************************ 00:07:16.167 END TEST bdev_verify 00:07:16.167 ************************************ 00:07:16.167 22:03:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:16.167 22:03:22 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.167 22:03:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:16.167 22:03:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.167 22:03:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.167 ************************************ 00:07:16.167 START TEST bdev_verify_big_io 00:07:16.167 ************************************ 00:07:16.167 22:03:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.454 [2024-12-16 22:03:22.494580] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:16.454 [2024-12-16 22:03:22.494693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74153 ] 00:07:16.454 [2024-12-16 22:03:22.649905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.454 [2024-12-16 22:03:22.667099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.454 [2024-12-16 22:03:22.667128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.019 Running I/O for 5 seconds... 00:07:22.194 1278.00 IOPS, 79.88 MiB/s [2024-12-16T22:03:29.106Z] 2714.00 IOPS, 169.62 MiB/s [2024-12-16T22:03:29.364Z] 3122.67 IOPS, 195.17 MiB/s 00:07:23.017 Latency(us) 00:07:23.017 [2024-12-16T22:03:29.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.017 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x0 length 0xbd0b 00:07:23.017 Nvme0n1 : 5.67 124.14 7.76 0.00 0.00 972102.25 9830.40 1187310.67 00:07:23.017 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:23.017 Nvme0n1 : 5.68 135.31 8.46 0.00 0.00 918232.29 16636.06 1019538.51 00:07:23.017 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x0 length 0xa000 00:07:23.017 Nvme1n1 : 5.77 130.38 8.15 0.00 0.00 910814.18 108890.58 1013085.74 00:07:23.017 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0xa000 length 0xa000 00:07:23.017 Nvme1n1 : 5.68 135.09 8.44 0.00 0.00 888450.21 81869.59 1006632.96 00:07:23.017 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x0 length 0x8000 00:07:23.017 Nvme2n1 : 5.77 124.45 7.78 0.00 0.00 920677.31 93565.24 1651910.50 00:07:23.017 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x8000 length 0x8000 00:07:23.017 Nvme2n1 : 5.77 137.88 8.62 0.00 0.00 846153.61 92758.65 1045349.61 00:07:23.017 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x0 length 0x8000 00:07:23.017 Nvme2n2 : 5.91 133.06 8.32 0.00 0.00 829703.46 56865.08 1871304.86 00:07:23.017 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x8000 length 0x8000 00:07:23.017 Nvme2n2 : 5.81 143.10 8.94 0.00 0.00 796634.43 38313.35 745295.56 00:07:23.017 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x0 length 0x8000 00:07:23.017 Nvme2n3 : 5.94 155.63 9.73 0.00 0.00 689230.93 8318.03 1477685.56 00:07:23.017 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:23.017 Verification LBA range: start 0x8000 length 0x8000 00:07:23.018 Nvme2n3 : 5.89 148.62 9.29 0.00 0.00 743874.80 41741.39 884030.23 00:07:23.018 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:23.018 Verification LBA range: start 0x0 length 0x2000 00:07:23.018 Nvme3n1 : 6.03 216.56 13.54 0.00 0.00 483996.47 115.79 1755154.90 00:07:23.018 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:07:23.018 Verification LBA range: start 0x2000 length 0x2000 00:07:23.018 Nvme3n1 : 5.90 162.71 10.17 0.00 0.00 663521.57 1033.45 871124.68 00:07:23.018 [2024-12-16T22:03:29.365Z] =================================================================================================================== 00:07:23.018 [2024-12-16T22:03:29.365Z] Total : 1746.93 109.18 0.00 0.00 781561.36 115.79 1871304.86 00:07:23.584 00:07:23.584 real 0m7.248s 00:07:23.584 user 0m13.840s 00:07:23.584 sys 0m0.195s 00:07:23.584 ************************************ 00:07:23.584 END TEST bdev_verify_big_io 00:07:23.584 ************************************ 00:07:23.584 22:03:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.584 22:03:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:23.584 22:03:29 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.584 22:03:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:23.584 22:03:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.584 22:03:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.584 ************************************ 00:07:23.584 START TEST bdev_write_zeroes 00:07:23.584 ************************************ 00:07:23.584 22:03:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.584 [2024-12-16 22:03:29.784291] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:23.584 [2024-12-16 22:03:29.784401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74257 ] 00:07:23.842 [2024-12-16 22:03:29.938116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.842 [2024-12-16 22:03:29.954192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.099 Running I/O for 1 seconds... 
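For reference, the bdev_verify_big_io pass that just finished and the bdev_write_zeroes pass launched above are both single bdevperf invocations; detached from the run_test wrapper they reduce to roughly the following (a sketch — paths follow the /home/vagrant/spdk_repo layout used throughout this log, and the flags mirror the queue depths and I/O sizes printed in the job lines):

    cd /home/vagrant/spdk_repo/spdk
    # 5-second verify workload, queue depth 128, 64 KiB I/O (the table above)
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5
    # 1-second write_zeroes workload, 4 KiB I/O (the run whose results follow)
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1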
00:07:25.032 76032.00 IOPS, 297.00 MiB/s 00:07:25.032 Latency(us) 00:07:25.032 [2024-12-16T22:03:31.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.032 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.032 Nvme0n1 : 1.02 12579.58 49.14 0.00 0.00 10155.25 8368.44 27021.00 00:07:25.032 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.032 Nvme1n1 : 1.02 12565.43 49.08 0.00 0.00 10153.81 8519.68 26819.35 00:07:25.032 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.032 Nvme2n1 : 1.02 12551.30 49.03 0.00 0.00 10138.40 8368.44 27021.00 00:07:25.032 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.032 Nvme2n2 : 1.03 12537.18 48.97 0.00 0.00 10133.16 8418.86 25710.28 00:07:25.032 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.032 Nvme2n3 : 1.03 12523.19 48.92 0.00 0.00 10109.54 8418.86 24500.38 00:07:25.032 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:25.032 Nvme3n1 : 1.03 12509.24 48.86 0.00 0.00 10104.98 8217.21 26012.75 00:07:25.032 [2024-12-16T22:03:31.379Z] =================================================================================================================== 00:07:25.032 [2024-12-16T22:03:31.379Z] Total : 75265.92 294.01 0.00 0.00 10132.53 8217.21 27021.00 00:07:25.290 00:07:25.290 real 0m1.803s 00:07:25.290 user 0m1.539s 00:07:25.290 sys 0m0.155s 00:07:25.290 22:03:31 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.290 ************************************ 00:07:25.290 END TEST bdev_write_zeroes 00:07:25.290 ************************************ 00:07:25.290 22:03:31 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:25.290 22:03:31 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.290 22:03:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:25.290 22:03:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.290 22:03:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.290 ************************************ 00:07:25.290 START TEST bdev_json_nonenclosed 00:07:25.290 ************************************ 00:07:25.290 22:03:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.290 [2024-12-16 22:03:31.629408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:25.290 [2024-12-16 22:03:31.629518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74288 ] 00:07:25.549 [2024-12-16 22:03:31.786774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.549 [2024-12-16 22:03:31.805039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.549 [2024-12-16 22:03:31.805122] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:25.549 [2024-12-16 22:03:31.805136] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.549 [2024-12-16 22:03:31.805147] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.549 00:07:25.549 real 0m0.294s 00:07:25.549 user 0m0.108s 00:07:25.549 sys 0m0.083s 00:07:25.549 22:03:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.549 22:03:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:25.549 ************************************ 00:07:25.549 END TEST bdev_json_nonenclosed 00:07:25.549 ************************************ 00:07:25.807 22:03:31 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.807 22:03:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:25.807 22:03:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.807 22:03:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.807 ************************************ 00:07:25.807 START TEST bdev_json_nonarray 00:07:25.807 ************************************ 00:07:25.807 22:03:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.807 [2024-12-16 22:03:31.964000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:25.807 [2024-12-16 22:03:31.964109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74319 ] 00:07:25.807 [2024-12-16 22:03:32.122157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.807 [2024-12-16 22:03:32.140455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.807 [2024-12-16 22:03:32.140536] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
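The two json_config failures in this stretch are intentional: nonenclosed.json omits the enclosing {} and nonarray.json makes "subsystems" something other than an array, so json_config_prepare_ctx must reject both. For contrast, the shape the loader accepts is a top-level object whose "subsystems" key is an array of subsystem objects — a minimal sketch (using a hypothetical malloc bdev, not the configs from this run):

    cat > /tmp/good.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    # bdevperf --json /tmp/good.json would get past json_config_prepare_ctx;
    # dropping the outer {} or turning "subsystems" into an object reproduces
    # the two *ERROR* lines logged by these tests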
00:07:25.807 [2024-12-16 22:03:32.140554] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.807 [2024-12-16 22:03:32.140565] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.065 00:07:26.065 real 0m0.290s 00:07:26.065 user 0m0.114s 00:07:26.065 sys 0m0.074s 00:07:26.065 22:03:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.065 22:03:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:26.065 ************************************ 00:07:26.065 END TEST bdev_json_nonarray 00:07:26.065 ************************************ 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:26.065 22:03:32 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:26.065 00:07:26.065 real 0m28.735s 00:07:26.065 user 0m46.061s 00:07:26.065 sys 0m4.555s 00:07:26.065 22:03:32 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.065 22:03:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.065 ************************************ 00:07:26.065 END TEST blockdev_nvme 00:07:26.065 ************************************ 00:07:26.065 22:03:32 -- spdk/autotest.sh@209 -- # uname -s 00:07:26.065 22:03:32 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:26.065 22:03:32 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.065 22:03:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.065 22:03:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.065 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:07:26.065 ************************************ 00:07:26.065 START TEST blockdev_nvme_gpt 00:07:26.065 ************************************ 00:07:26.065 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:26.066 * Looking for test storage... 
00:07:26.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.066 22:03:32 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.066 --rc genhtml_branch_coverage=1 00:07:26.066 --rc genhtml_function_coverage=1 00:07:26.066 --rc genhtml_legend=1 00:07:26.066 --rc geninfo_all_blocks=1 00:07:26.066 --rc geninfo_unexecuted_blocks=1 00:07:26.066 00:07:26.066 ' 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.066 --rc 
genhtml_branch_coverage=1 00:07:26.066 --rc genhtml_function_coverage=1 00:07:26.066 --rc genhtml_legend=1 00:07:26.066 --rc geninfo_all_blocks=1 00:07:26.066 --rc geninfo_unexecuted_blocks=1 00:07:26.066 00:07:26.066 ' 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.066 --rc genhtml_branch_coverage=1 00:07:26.066 --rc genhtml_function_coverage=1 00:07:26.066 --rc genhtml_legend=1 00:07:26.066 --rc geninfo_all_blocks=1 00:07:26.066 --rc geninfo_unexecuted_blocks=1 00:07:26.066 00:07:26.066 ' 00:07:26.066 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:26.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.066 --rc genhtml_branch_coverage=1 00:07:26.066 --rc genhtml_function_coverage=1 00:07:26.066 --rc genhtml_legend=1 00:07:26.066 --rc geninfo_all_blocks=1 00:07:26.066 --rc geninfo_unexecuted_blocks=1 00:07:26.066 00:07:26.066 ' 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:26.066 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74392 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74392 
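The scripts/common.sh trace a few lines up (lt 1.15 2 -> cmp_versions 1.15 '<' 2, used to pick lcov options) is a field-wise version comparison: split both strings on the .-: separators, then compare numerically field by field until one side differs. Condensed into a standalone sketch (not the exact scripts/common.sh code, which also handles other comparison operators):

    # returns 0 (true) when version $1 sorts strictly before version $2
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch taken above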
00:07:26.324 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74392 ']' 00:07:26.324 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.324 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.324 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.324 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.324 22:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.324 22:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.324 [2024-12-16 22:03:32.487896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:26.324 [2024-12-16 22:03:32.488007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74392 ] 00:07:26.324 [2024-12-16 22:03:32.648236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.324 [2024-12-16 22:03:32.666624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.256 22:03:33 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.256 22:03:33 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:27.256 22:03:33 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:27.256 22:03:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:27.256 22:03:33 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:27.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:27.515 Waiting for block devices as requested 00:07:27.515 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.515 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.515 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:27.772 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.083 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:33.083 22:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:33.083 22:03:38 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:33.083 22:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:33.083 22:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:33.083 22:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:33.083 22:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:33.083 22:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:33.083 22:03:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:33.083 BYT; 00:07:33.083 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:33.083 BYT; 00:07:33.083 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:33.083 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:33.083 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.084 22:03:39 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.084 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:33.084 22:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:34.018 The operation has completed successfully. 00:07:34.018 22:03:40 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:34.952 The operation has completed successfully. 00:07:34.952 22:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:35.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.776 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.776 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.777 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:35.777 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.777 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.777 [] 00:07:35.777 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.777 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:35.777 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:35.777 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:35.777 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:35.777 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:35.777 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.777 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:36.343 22:03:42 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.343 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:36.343 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:36.344 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f8b8c2ec-5bbf-496a-8ece-770c27808a82"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f8b8c2ec-5bbf-496a-8ece-770c27808a82",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "99a87b58-88ff-4398-8db1-27b0490c9916"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "99a87b58-88ff-4398-8db1-27b0490c9916",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8ac0ded7-36a5-4834-8236-35bbe95336a6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ac0ded7-36a5-4834-8236-35bbe95336a6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2d2519c2-ab19-4735-8dbd-96b8efc4860d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2d2519c2-ab19-4735-8dbd-96b8efc4860d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d4ca92d8-ccf1-4a13-afd6-fc4ca8ac173c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d4ca92d8-ccf1-4a13-afd6-fc4ca8ac173c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:36.344 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:36.344 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:36.344 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:36.344 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 74392 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74392 ']' 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74392 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74392 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.344 killing process with pid 74392 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74392' 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74392 00:07:36.344 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74392 00:07:36.603 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:36.603 22:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:36.603 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:36.603 22:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.603 22:03:42 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.603 ************************************ 00:07:36.603 START TEST bdev_hello_world 00:07:36.603 ************************************ 00:07:36.603 22:03:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:36.603 [2024-12-16 22:03:42.876316] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:36.603 [2024-12-16 22:03:42.876435] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74995 ] 00:07:36.861 [2024-12-16 22:03:43.027446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.861 [2024-12-16 22:03:43.043982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.118 [2024-12-16 22:03:43.402247] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:37.118 [2024-12-16 22:03:43.402300] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:37.119 [2024-12-16 22:03:43.402321] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:37.119 [2024-12-16 22:03:43.404384] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:37.119 [2024-12-16 22:03:43.405355] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:37.119 [2024-12-16 22:03:43.405397] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:37.119 [2024-12-16 22:03:43.405920] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
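One aside before the hello_world app stops: the Nvme1n1p1/Nvme1n1p2 bdevs that showed up in the bdev_get_bdevs dump earlier (and that the remaining suites keep exercising) come from the setup_gpt_conf steps logged above. Condensed, with the GUIDs exactly as they appear in that trace, the preparation was:

    # label the first namespace without a recognised disk label and carve
    # two half-size partitions
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # retag them with SPDK's partition-type GUIDs (parsed out of
    # module/bdev/gpt/gpt.h by get_spdk_gpt / get_spdk_gpt_old) so the gpt
    # bdev module exposes them as Nvme1n1p1/p2 — kernel /dev/nvme0n1 and
    # SPDK's Nvme1 are the same controller under different names here
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1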
00:07:37.119 00:07:37.119 [2024-12-16 22:03:43.405952] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:37.376 00:07:37.376 real 0m0.735s 00:07:37.376 user 0m0.474s 00:07:37.376 sys 0m0.158s 00:07:37.376 22:03:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.376 ************************************ 00:07:37.376 END TEST bdev_hello_world 00:07:37.376 ************************************ 00:07:37.376 22:03:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:37.376 22:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:37.376 22:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.376 22:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.376 22:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.376 ************************************ 00:07:37.376 START TEST bdev_bounds 00:07:37.379 ************************************ 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75026 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:37.379 Process bdevio pid: 75026 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75026' 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75026 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 75026 ']' 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.379 22:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:37.379 [2024-12-16 22:03:43.664879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
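The bdevio app starting here is what produces the CUnit suites below; outside the run_test wrapper the sequence is just the two commands already visible in the trace, with the second one triggering the suites over the RPC socket:

    # start the exerciser against the shared bdev config; -w (as in the
    # trace above) keeps bdevio waiting for test requests over RPC
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # ... then ask it to run everything; this is what prints the
    # "Suite: bdevio tests on: ..." blocks that follow
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests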
00:07:37.379 [2024-12-16 22:03:43.665002] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75026 ] 00:07:37.636 [2024-12-16 22:03:43.823189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.637 [2024-12-16 22:03:43.844219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.637 [2024-12-16 22:03:43.844778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.637 [2024-12-16 22:03:43.844868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.202 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.202 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:38.202 22:03:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:38.461 I/O targets: 00:07:38.461 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:38.461 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:38.461 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:38.461 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:38.461 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:38.461 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:38.461 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:38.461 00:07:38.461 00:07:38.461 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.461 http://cunit.sourceforge.net/ 00:07:38.461 00:07:38.461 00:07:38.461 Suite: bdevio tests on: Nvme3n1 00:07:38.461 Test: blockdev write read block ...passed 00:07:38.461 Test: blockdev write zeroes read block ...passed 00:07:38.461 Test: blockdev write zeroes read no split ...passed 00:07:38.461 Test: blockdev write zeroes read split ...passed 00:07:38.461 Test: blockdev write zeroes read split partial ...passed 00:07:38.461 Test: blockdev reset ...[2024-12-16 22:03:44.612390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:38.461 [2024-12-16 22:03:44.615732] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:38.461 passed 00:07:38.461 Test: blockdev write read 8 blocks ...passed 00:07:38.461 Test: blockdev write read size > 128k ...passed 00:07:38.461 Test: blockdev write read invalid size ...passed 00:07:38.461 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.461 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.461 Test: blockdev write read max offset ...passed 00:07:38.461 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.461 Test: blockdev writev readv 8 blocks ...passed 00:07:38.461 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.461 Test: blockdev writev readv block ...passed 00:07:38.461 Test: blockdev writev readv size > 128k ...passed 00:07:38.461 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.461 Test: blockdev comparev and writev ...[2024-12-16 22:03:44.632872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c900e000 len:0x1000 00:07:38.461 [2024-12-16 22:03:44.632919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.461 passed 00:07:38.461 Test: blockdev nvme passthru rw ...passed 00:07:38.461 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:03:44.635074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.461 [2024-12-16 22:03:44.635104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.461 passed 00:07:38.461 Test: blockdev nvme admin passthru ...passed 00:07:38.461 Test: blockdev copy ...passed 00:07:38.461 Suite: bdevio tests on: Nvme2n3 00:07:38.461 Test: blockdev write read block ...passed 00:07:38.461 Test: blockdev write zeroes read block ...passed 00:07:38.461 Test: blockdev write zeroes read no split ...passed 00:07:38.461 Test: blockdev write zeroes read split ...passed 00:07:38.461 Test: blockdev write zeroes read split partial ...passed 00:07:38.461 Test: blockdev reset ...[2024-12-16 22:03:44.662585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.461 [2024-12-16 22:03:44.665618] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:38.461 passed 00:07:38.461 Test: blockdev write read 8 blocks ...passed 00:07:38.461 Test: blockdev write read size > 128k ...passed 00:07:38.461 Test: blockdev write read invalid size ...passed 00:07:38.461 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.461 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.461 Test: blockdev write read max offset ...passed 00:07:38.461 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.461 Test: blockdev writev readv 8 blocks ...passed 00:07:38.461 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.461 Test: blockdev writev readv block ...passed 00:07:38.461 Test: blockdev writev readv size > 128k ...passed 00:07:38.461 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.461 Test: blockdev comparev and writev ...[2024-12-16 22:03:44.679321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9006000 len:0x1000 00:07:38.462 [2024-12-16 22:03:44.679366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.462 passed 00:07:38.462 Test: blockdev nvme passthru rw ...passed 00:07:38.462 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:03:44.681629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.462 passed 00:07:38.462 Test: blockdev nvme admin passthru ...[2024-12-16 22:03:44.681659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.462 passed 00:07:38.462 Test: blockdev copy ...passed 00:07:38.462 Suite: bdevio tests on: Nvme2n2 00:07:38.462 Test: blockdev write read block ...passed 00:07:38.462 Test: blockdev write zeroes read block ...passed 00:07:38.462 Test: blockdev write zeroes read no split ...passed 00:07:38.462 Test: blockdev write zeroes read split ...passed 00:07:38.462 Test: blockdev write zeroes read split partial ...passed 00:07:38.462 Test: blockdev reset ...[2024-12-16 22:03:44.700896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.462 [2024-12-16 22:03:44.703728] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:38.462 passed 00:07:38.462 Test: blockdev write read 8 blocks ...passed 00:07:38.462 Test: blockdev write read size > 128k ...passed 00:07:38.462 Test: blockdev write read invalid size ...passed 00:07:38.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.462 Test: blockdev write read max offset ...passed 00:07:38.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.462 Test: blockdev writev readv 8 blocks ...passed 00:07:38.462 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.462 Test: blockdev writev readv block ...passed 00:07:38.462 Test: blockdev writev readv size > 128k ...passed 00:07:38.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.462 Test: blockdev comparev and writev ...[2024-12-16 22:03:44.718961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9008000 len:0x1000 00:07:38.462 [2024-12-16 22:03:44.719005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.462 passed 00:07:38.462 Test: blockdev nvme passthru rw ...passed 00:07:38.462 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:03:44.720956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.462 passed 00:07:38.462 Test: blockdev nvme admin passthru ...[2024-12-16 22:03:44.720984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.462 passed 00:07:38.462 Test: blockdev copy ...passed 00:07:38.462 Suite: bdevio tests on: Nvme2n1 00:07:38.462 Test: blockdev write read block ...passed 00:07:38.462 Test: blockdev write zeroes read block ...passed 00:07:38.462 Test: blockdev write zeroes read no split ...passed 00:07:38.462 Test: blockdev write zeroes read split ...passed 00:07:38.462 Test: blockdev write zeroes read split partial ...passed 00:07:38.462 Test: blockdev reset ...[2024-12-16 22:03:44.749043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:38.462 passed 00:07:38.462 Test: blockdev write read 8 blocks ...[2024-12-16 22:03:44.751655] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
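Note: the "nvme passthru vendor specific" steps above push an admin command with a reserved/vendor-specific opcode through the passthru path and pass only if the controller answers with INVALID OPCODE (00/01), proving the rejection travels back intact. A rough out-of-harness equivalent using kernel nvme-cli rather than SPDK (device path and expected error text are assumptions):

# 0xc0 sits in the vendor-specific admin opcode range; an unimplemented
# opcode should fail cleanly rather than hang or succeed.
nvme admin-passthru /dev/nvme0 --opcode=0xc0 --namespace-id=1 2>&1 |
  grep -i 'invalid.*opcode'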
00:07:38.462 passed 00:07:38.462 Test: blockdev write read size > 128k ...passed 00:07:38.462 Test: blockdev write read invalid size ...passed 00:07:38.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.462 Test: blockdev write read max offset ...passed 00:07:38.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.462 Test: blockdev writev readv 8 blocks ...passed 00:07:38.462 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.462 Test: blockdev writev readv block ...passed 00:07:38.462 Test: blockdev writev readv size > 128k ...passed 00:07:38.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.462 Test: blockdev comparev and writev ...[2024-12-16 22:03:44.756943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e983d000 len:0x1000 00:07:38.462 [2024-12-16 22:03:44.756979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.462 passed 00:07:38.462 Test: blockdev nvme passthru rw ...passed 00:07:38.462 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:03:44.757720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:38.462 [2024-12-16 22:03:44.757754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:38.462 passed 00:07:38.462 Test: blockdev nvme admin passthru ...passed 00:07:38.462 Test: blockdev copy ...passed 00:07:38.462 Suite: bdevio tests on: Nvme1n1p2 00:07:38.462 Test: blockdev write read block ...passed 00:07:38.462 Test: blockdev write zeroes read block ...passed 00:07:38.462 Test: blockdev write zeroes read no split ...passed 00:07:38.462 Test: blockdev write zeroes read split ...passed 00:07:38.462 Test: blockdev write zeroes read split partial ...passed 00:07:38.462 Test: blockdev reset ...[2024-12-16 22:03:44.772286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:38.462 [2024-12-16 22:03:44.773759] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
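Note: Nvme2n1, Nvme2n2 and Nvme2n3 all reset the same controller at 0000:00:12.0, and their compares carry nsid:1, nsid:2 and nsid:3 -- they are three namespaces of one device, so each per-namespace suite exercises a shared reset path. A sketch for confirming that mapping on a live target (RPC names as shipped in this tree; verify against your build):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # controllers, e.g. Nvme2
$RPC bdev_get_bdevs | jq -r '.[].name'              # namespace bdevs: Nvme2n1, Nvme2n2, ...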
00:07:38.462 passed 00:07:38.462 Test: blockdev write read 8 blocks ...passed 00:07:38.462 Test: blockdev write read size > 128k ...passed 00:07:38.462 Test: blockdev write read invalid size ...passed 00:07:38.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.462 Test: blockdev write read max offset ...passed 00:07:38.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.462 Test: blockdev writev readv 8 blocks ...passed 00:07:38.462 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.462 Test: blockdev writev readv block ...passed 00:07:38.462 Test: blockdev writev readv size > 128k ...passed 00:07:38.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.462 Test: blockdev comparev and writev ...[2024-12-16 22:03:44.781897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e9839000 len:0x1000 00:07:38.462 [2024-12-16 22:03:44.782003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.462 passed 00:07:38.462 Test: blockdev nvme passthru rw ...passed 00:07:38.462 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.462 Test: blockdev nvme admin passthru ...passed 00:07:38.462 Test: blockdev copy ...passed 00:07:38.462 Suite: bdevio tests on: Nvme1n1p1 00:07:38.462 Test: blockdev write read block ...passed 00:07:38.462 Test: blockdev write zeroes read block ...passed 00:07:38.462 Test: blockdev write zeroes read no split ...passed 00:07:38.462 Test: blockdev write zeroes read split ...passed 00:07:38.462 Test: blockdev write zeroes read split partial ...passed 00:07:38.462 Test: blockdev reset ...[2024-12-16 22:03:44.798499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:38.462 [2024-12-16 22:03:44.802010] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
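Note: the compare on Nvme1n1p2 lands at lba:655360 even though bdevio addresses block 0 of that bdev. The GPT layer remaps partition-relative offsets to namespace-absolute LBAs before spdk_nvme prints the command, and the Nvme1n1p1 suite below shows the same effect at lba:256. With the 4096-byte blocks used here (len:1 block, 0x1000-byte buffer), the partition start offsets work out as follows -- a quick check, not harness code:

BLOCK_SIZE=4096
for lba in 256 655360; do
  printf 'start LBA %-6s -> byte offset %d\n' "$lba" $((lba * BLOCK_SIZE))
done
# start LBA 256    -> byte offset 1048576     (1 MiB)
# start LBA 655360 -> byte offset 2684354560  (2.5 GiB)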
00:07:38.462 passed 00:07:38.462 Test: blockdev write read 8 blocks ...passed 00:07:38.462 Test: blockdev write read size > 128k ...passed 00:07:38.462 Test: blockdev write read invalid size ...passed 00:07:38.462 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.462 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.462 Test: blockdev write read max offset ...passed 00:07:38.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.720 Test: blockdev writev readv 8 blocks ...passed 00:07:38.720 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.720 Test: blockdev writev readv block ...passed 00:07:38.720 Test: blockdev writev readv size > 128k ...passed 00:07:38.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.720 Test: blockdev comparev and writev ...[2024-12-16 22:03:44.817479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e9835000 len:0x1000 00:07:38.720 [2024-12-16 22:03:44.817522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:38.720 passed 00:07:38.720 Test: blockdev nvme passthru rw ...passed 00:07:38.720 Test: blockdev nvme passthru vendor specific ...passed 00:07:38.720 Test: blockdev nvme admin passthru ...passed 00:07:38.720 Test: blockdev copy ...passed 00:07:38.720 Suite: bdevio tests on: Nvme0n1 00:07:38.720 Test: blockdev write read block ...passed 00:07:38.720 Test: blockdev write zeroes read block ...passed 00:07:38.720 Test: blockdev write zeroes read no split ...passed 00:07:38.720 Test: blockdev write zeroes read split ...passed 00:07:38.720 Test: blockdev write zeroes read split partial ...passed 00:07:38.720 Test: blockdev reset ...[2024-12-16 22:03:44.837751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:38.720 [2024-12-16 22:03:44.839291] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:38.720 passed 00:07:38.720 Test: blockdev write read 8 blocks ...passed 00:07:38.720 Test: blockdev write read size > 128k ...passed 00:07:38.720 Test: blockdev write read invalid size ...passed 00:07:38.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:38.721 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:38.721 Test: blockdev write read max offset ...passed 00:07:38.721 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:38.721 Test: blockdev writev readv 8 blocks ...passed 00:07:38.721 Test: blockdev writev readv 30 x 1block ...passed 00:07:38.721 Test: blockdev writev readv block ...passed 00:07:38.721 Test: blockdev writev readv size > 128k ...passed 00:07:38.721 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:38.721 Test: blockdev comparev and writev ...passed 00:07:38.721 Test: blockdev nvme passthru rw ...[2024-12-16 22:03:44.846936] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:38.721 separate metadata which is not supported yet. 
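Note: the ERROR from bdevio.c just above is an announced skip rather than a failure. Nvme0n1 is formatted with separate (non-interleaved) metadata, which the comparev_and_writev path cannot handle yet, so the test is recorded as passed without running. A sketch for spotting which bdevs carry metadata; the md_size/md_interleave field names are an assumption about this SPDK revision's bdev_get_bdevs output, so verify them against your build:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs |
  jq -r '.[] | "\(.name)  md_size=\(.md_size)  interleave=\(.md_interleave)"'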
00:07:38.721 passed 00:07:38.721 Test: blockdev nvme passthru vendor specific ...[2024-12-16 22:03:44.848003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:38.721 [2024-12-16 22:03:44.848114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:38.721 passed 00:07:38.721 Test: blockdev nvme admin passthru ...passed 00:07:38.721 Test: blockdev copy ...passed 00:07:38.721 00:07:38.721 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.721 suites 7 7 n/a 0 0 00:07:38.721 tests 161 161 161 0 0 00:07:38.721 asserts 1025 1025 1025 0 n/a 00:07:38.721 00:07:38.721 Elapsed time = 0.585 seconds 00:07:38.721 0 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75026 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 75026 ']' 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 75026 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75026 00:07:38.721 killing process with pid 75026 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75026' 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 75026 00:07:38.721 22:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 75026 00:07:38.721 22:03:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:38.721 00:07:38.721 real 0m1.434s 00:07:38.721 user 0m3.585s 00:07:38.721 sys 0m0.281s 00:07:38.721 22:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.721 22:03:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:38.721 ************************************ 00:07:38.721 END TEST bdev_bounds 00:07:38.721 ************************************ 00:07:38.979 22:03:45 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:38.979 22:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.979 22:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.979 22:03:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.979 ************************************ 00:07:38.979 START TEST bdev_nbd 00:07:38.979 ************************************ 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75081 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75081 /var/tmp/spdk-nbd.sock 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 75081 ']' 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:38.979 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:38.979 [2024-12-16 22:03:45.162769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
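Note: the helper app coming up here is the pattern nbd_function_test relies on: bdev_svc is launched on a private RPC socket with the bdev config preloaded, and waitforlisten polls until that socket answers before any nbd_start_disk call is made. A condensed sketch of the same launch-and-poll sequence (rpc_get_methods is used purely as a liveness probe; the real waitforlisten also rechecks that the pid is still alive between polls):

SOCK=/var/tmp/spdk-nbd.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!
for _ in $(seq 1 100); do
  "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null && break
  sleep 0.1
done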
00:07:38.979 [2024-12-16 22:03:45.163012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.979 [2024-12-16 22:03:45.318828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.237 [2024-12-16 22:03:45.337691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:39.802 22:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:39.802 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.060 1+0 records in 00:07:40.060 1+0 records out 00:07:40.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689071 s, 5.9 MB/s 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.060 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.319 1+0 records in 00:07:40.319 1+0 records out 00:07:40.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836972 s, 4.9 MB/s 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.319 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.577 1+0 records in 00:07:40.577 1+0 records out 00:07:40.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882433 s, 4.6 MB/s 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.577 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.835 1+0 records in 00:07:40.835 1+0 records out 00:07:40.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106309 s, 3.9 MB/s 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:40.835 22:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:40.835 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.835 1+0 records in 00:07:40.835 1+0 records out 00:07:40.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902618 s, 4.5 MB/s 00:07:40.836 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.836 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:40.836 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.836 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:40.836 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:40.836 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.836 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
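Note: the grep/dd/stat sequence that repeats after every nbd_start_disk above is the waitfornbd readiness gate: wait for the node to appear in /proc/partitions, then prove it is actually serviceable by reading one 4 KiB block with O_DIRECT and checking that the scratch file is non-empty. A reconstruction from the trace (the retry delay is an assumption; the harness keeps its scratch file under test/bdev/):

waitfornbd() {
  local nbd_name=$1 i size tmp=/tmp/nbdtest
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
  done
  # One O_DIRECT block read: fails fast if the NBD connection is not live.
  dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
  size=$(stat -c %s "$tmp")
  rm -f "$tmp"
  [ "$size" != 0 ]
}

The MB/s figures dd reports here (roughly 4-12 MB/s for a single 4 KiB O_DIRECT read) are essentially a latency measurement for one kernel/userspace NBD round trip, not a bandwidth figure.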
00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.094 1+0 records in 00:07:41.094 1+0 records out 00:07:41.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960814 s, 4.3 MB/s 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.094 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.351 1+0 records in 00:07:41.351 1+0 records out 00:07:41.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712175 s, 5.8 MB/s 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.351 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd0", 00:07:41.609 "bdev_name": "Nvme0n1" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd1", 00:07:41.609 "bdev_name": "Nvme1n1p1" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd2", 00:07:41.609 "bdev_name": "Nvme1n1p2" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd3", 00:07:41.609 "bdev_name": "Nvme2n1" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd4", 00:07:41.609 "bdev_name": "Nvme2n2" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd5", 00:07:41.609 "bdev_name": "Nvme2n3" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd6", 00:07:41.609 "bdev_name": "Nvme3n1" 00:07:41.609 } 00:07:41.609 ]' 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd0", 00:07:41.609 "bdev_name": "Nvme0n1" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd1", 00:07:41.609 "bdev_name": "Nvme1n1p1" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd2", 00:07:41.609 "bdev_name": "Nvme1n1p2" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd3", 00:07:41.609 "bdev_name": "Nvme2n1" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd4", 00:07:41.609 "bdev_name": "Nvme2n2" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd5", 00:07:41.609 "bdev_name": "Nvme2n3" 00:07:41.609 }, 00:07:41.609 { 00:07:41.609 "nbd_device": "/dev/nbd6", 00:07:41.609 "bdev_name": "Nvme3n1" 00:07:41.609 } 00:07:41.609 ]' 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.609 22:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.867 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:42.190 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.449 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.706 22:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:42.706 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
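Note: the stop path being traced here is the mirror image of startup: nbd_stop_disk is issued per device, waitfornbd_exit polls /proc/partitions until the node disappears, and the harness then asks nbd_get_disks for a final listing and requires the /dev/nbd count to be zero. A condensed equivalent of that teardown-and-verify sequence (device list copied from this run):

SOCK=/var/tmp/spdk-nbd.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
  "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
  while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
done
# grep -c prints 0 (and exits nonzero) when nothing matches, hence || true.
count=$("$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] && echo 'all NBD exports stopped'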
00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.964 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.222 22:03:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.222 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:43.479 /dev/nbd0 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.479 1+0 records in 00:07:43.479 1+0 records out 00:07:43.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049711 s, 8.2 MB/s 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.479 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:43.736 /dev/nbd1 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.736 22:03:49 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.736 1+0 records in 00:07:43.736 1+0 records out 00:07:43.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347004 s, 11.8 MB/s 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.736 22:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:43.736 /dev/nbd10 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.994 1+0 records in 00:07:43.994 1+0 records out 00:07:43.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349801 s, 11.7 MB/s 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:43.994 /dev/nbd11 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.994 1+0 records in 00:07:43.994 1+0 records out 00:07:43.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328657 s, 12.5 MB/s 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.994 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:44.252 /dev/nbd12 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.252 1+0 records in 00:07:44.252 1+0 records out 00:07:44.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371037 s, 11.0 MB/s 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.252 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:44.511 /dev/nbd13 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.511 1+0 records in 00:07:44.511 1+0 records out 00:07:44.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601316 s, 6.8 MB/s 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.511 22:03:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.511 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:44.769 /dev/nbd14 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.769 1+0 records in 00:07:44.769 1+0 records out 00:07:44.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719401 s, 5.7 MB/s 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.769 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:44.770 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.770 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.770 22:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd0", 00:07:45.027 "bdev_name": "Nvme0n1" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd1", 00:07:45.027 "bdev_name": "Nvme1n1p1" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd10", 00:07:45.027 "bdev_name": "Nvme1n1p2" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd11", 00:07:45.027 "bdev_name": "Nvme2n1" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd12", 00:07:45.027 "bdev_name": "Nvme2n2" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd13", 
00:07:45.027 "bdev_name": "Nvme2n3" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd14", 00:07:45.027 "bdev_name": "Nvme3n1" 00:07:45.027 } 00:07:45.027 ]' 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd0", 00:07:45.027 "bdev_name": "Nvme0n1" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd1", 00:07:45.027 "bdev_name": "Nvme1n1p1" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd10", 00:07:45.027 "bdev_name": "Nvme1n1p2" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd11", 00:07:45.027 "bdev_name": "Nvme2n1" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd12", 00:07:45.027 "bdev_name": "Nvme2n2" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd13", 00:07:45.027 "bdev_name": "Nvme2n3" 00:07:45.027 }, 00:07:45.027 { 00:07:45.027 "nbd_device": "/dev/nbd14", 00:07:45.027 "bdev_name": "Nvme3n1" 00:07:45.027 } 00:07:45.027 ]' 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.027 /dev/nbd1 00:07:45.027 /dev/nbd10 00:07:45.027 /dev/nbd11 00:07:45.027 /dev/nbd12 00:07:45.027 /dev/nbd13 00:07:45.027 /dev/nbd14' 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.027 /dev/nbd1 00:07:45.027 /dev/nbd10 00:07:45.027 /dev/nbd11 00:07:45.027 /dev/nbd12 00:07:45.027 /dev/nbd13 00:07:45.027 /dev/nbd14' 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:45.027 256+0 records in 00:07:45.027 256+0 records out 00:07:45.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120108 s, 87.3 MB/s 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.027 256+0 records in 00:07:45.027 256+0 records out 00:07:45.027 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0820369 s, 12.8 MB/s 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.027 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.285 256+0 records in 00:07:45.285 256+0 records out 00:07:45.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0738424 s, 14.2 MB/s 00:07:45.285 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.285 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:45.285 256+0 records in 00:07:45.285 256+0 records out 00:07:45.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0760895 s, 13.8 MB/s 00:07:45.285 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.285 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:45.285 256+0 records in 00:07:45.285 256+0 records out 00:07:45.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11724 s, 8.9 MB/s 00:07:45.285 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.285 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:45.543 256+0 records in 00:07:45.543 256+0 records out 00:07:45.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0719813 s, 14.6 MB/s 00:07:45.543 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.543 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:45.543 256+0 records in 00:07:45.543 256+0 records out 00:07:45.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108612 s, 9.7 MB/s 00:07:45.543 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.543 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:45.802 256+0 records in 00:07:45.802 256+0 records out 00:07:45.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0945318 s, 11.1 MB/s 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.802 22:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.063 22:03:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.063 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.322 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.580 22:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.840 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.099 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.100 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:47.100 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:47.100 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:47.100 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:47.100 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.100 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.100 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.359 22:03:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:47.359 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:47.618 malloc_lvol_verify 00:07:47.618 22:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:47.877 76902956-3256-401b-84be-97af99f48c83 00:07:47.877 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:48.138 8df4160f-3c69-46da-920d-0ac6879e82ca 00:07:48.139 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:48.400 /dev/nbd0 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:48.400 mke2fs 1.47.0 (5-Feb-2023) 00:07:48.400 Discarding device blocks: 0/4096 done 00:07:48.400 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:48.400 00:07:48.400 Allocating group tables: 0/1 done 00:07:48.400 Writing inode tables: 0/1 done 00:07:48.400 Creating journal (1024 blocks): done 00:07:48.400 Writing superblocks and filesystem accounting information: 0/1 done 00:07:48.400 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:48.400 22:03:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.400 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75081 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 75081 ']' 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 75081 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75081 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.661 killing process with pid 75081 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75081' 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 75081 00:07:48.661 22:03:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 75081 00:07:51.972 22:03:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:51.972 00:07:51.972 real 0m13.216s 00:07:51.972 user 0m16.902s 00:07:51.972 sys 0m4.189s 00:07:51.972 22:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.972 ************************************ 00:07:51.972 END TEST bdev_nbd 00:07:51.972 ************************************ 00:07:51.972 22:03:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:52.244 22:03:58 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:52.244 22:03:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:52.244 22:03:58 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:52.244 skipping fio tests on NVMe due to multi-ns failures. 00:07:52.244 22:03:58 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
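Note: the bdev_nbd test that just finished exercised two reusable shell patterns — waitfornbd (poll /proc/partitions until the kernel exposes the device, then prove it services I/O with one direct 4 KiB read) and nbd_dd_data_verify (dd the same random 1 MiB onto every device, then cmp it back). A condensed, standalone sketch of both patterns follows; the 0.1 s retry interval and /tmp scratch paths are assumptions, since the trace shows only the retry bound and the repo paths.

# Wait until an nbd device is listed and readable (mirrors waitfornbd above).
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed interval; the trace only shows the 20-try bound
    done
    # One direct-I/O read proves the device actually services requests.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}

# Write the same random 1 MiB to every device, then verify byte-for-byte
# (mirrors the nbd_dd_data_verify write and verify phases traced above).
nbd_data_verify() {
    local nbd_list=("$@") tmp=/tmp/nbdrandtest dev
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev" || return 1
    done
    rm -f "$tmp"
}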
00:07:52.244 22:03:58 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:52.244 22:03:58 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.244 22:03:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:52.244 22:03:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.244 22:03:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.244 ************************************ 00:07:52.244 START TEST bdev_verify 00:07:52.244 ************************************ 00:07:52.244 22:03:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.244 [2024-12-16 22:03:58.419378] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:52.244 [2024-12-16 22:03:58.419499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75487 ] 00:07:52.244 [2024-12-16 22:03:58.578749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.502 [2024-12-16 22:03:58.599774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.502 [2024-12-16 22:03:58.599804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.760 Running I/O for 5 seconds... 
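For reference while reading the results below, the bdevperf flags in the run_test line above decode as follows (-C and the trailing '' are passed through exactly as traced and not interpreted here):

#   --json bdev.json   attach the bdevs described in this JSON config
#   -q 128             keep up to 128 I/Os outstanding per job
#   -o 4096            use 4 KiB I/Os
#   -w verify          write a pattern, read it back, and compare
#   -t 5               run each job for 5 seconds
#   -m 0x3             core mask: two reactors on cores 0 and 1, which is
#                      why each bdev gets a Core Mask 0x1 / 0x2 job pair below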
00:07:55.067 20096.00 IOPS, 78.50 MiB/s [2024-12-16T22:04:02.348Z] 22880.00 IOPS, 89.38 MiB/s [2024-12-16T22:04:03.304Z] 22954.67 IOPS, 89.67 MiB/s [2024-12-16T22:04:04.237Z] 21888.00 IOPS, 85.50 MiB/s [2024-12-16T22:04:04.237Z] 21491.20 IOPS, 83.95 MiB/s 00:07:57.890 Latency(us) 00:07:57.890 [2024-12-16T22:04:04.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.890 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x0 length 0xbd0bd 00:07:57.890 Nvme0n1 : 5.07 1477.68 5.77 0.00 0.00 86214.21 9275.86 92758.65 00:07:57.890 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:57.890 Nvme0n1 : 5.07 1540.22 6.02 0.00 0.00 82797.78 13510.50 100018.02 00:07:57.890 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x0 length 0x4ff80 00:07:57.890 Nvme1n1p1 : 5.09 1484.79 5.80 0.00 0.00 85855.97 13006.38 82272.89 00:07:57.890 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:57.890 Nvme1n1p1 : 5.07 1539.78 6.01 0.00 0.00 82535.50 15526.99 84692.68 00:07:57.890 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x0 length 0x4ff7f 00:07:57.890 Nvme1n1p2 : 5.07 1476.00 5.77 0.00 0.00 86018.55 12300.60 82676.18 00:07:57.890 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:57.890 Nvme1n1p2 : 5.07 1539.30 6.01 0.00 0.00 82354.26 17039.36 79046.50 00:07:57.890 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x0 length 0x80000 00:07:57.890 Nvme2n1 : 5.09 1483.89 5.80 0.00 0.00 85580.93 12754.31 72593.72 00:07:57.890 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x80000 length 0x80000 00:07:57.890 Nvme2n1 : 5.07 1538.81 6.01 0.00 0.00 82171.42 17745.13 76223.41 00:07:57.890 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x0 length 0x80000 00:07:57.890 Nvme2n2 : 5.09 1483.10 5.79 0.00 0.00 85424.20 14317.10 68560.74 00:07:57.890 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x80000 length 0x80000 00:07:57.890 Nvme2n2 : 5.09 1546.66 6.04 0.00 0.00 81625.78 2898.71 73400.32 00:07:57.890 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x0 length 0x80000 00:07:57.890 Nvme2n3 : 5.09 1482.68 5.79 0.00 0.00 85251.44 14216.27 73400.32 00:07:57.890 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x80000 length 0x80000 00:07:57.890 Nvme2n3 : 5.10 1555.26 6.08 0.00 0.00 81092.76 9326.28 72997.02 00:07:57.890 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x0 length 0x20000 00:07:57.890 Nvme3n1 : 5.09 1482.30 5.79 0.00 0.00 85091.83 14216.27 73803.62 00:07:57.890 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:57.890 Verification LBA range: start 0x20000 length 0x20000 00:07:57.890 Nvme3n1 
: 5.10 1554.85 6.07 0.00 0.00 81007.03 9477.51 77433.30 00:07:57.890 [2024-12-16T22:04:04.237Z] =================================================================================================================== 00:07:57.891 [2024-12-16T22:04:04.238Z] Total : 21185.32 82.76 0.00 0.00 83745.98 2898.71 100018.02 00:07:58.456 00:07:58.456 real 0m6.386s 00:07:58.456 user 0m11.779s 00:07:58.456 sys 0m0.199s 00:07:58.456 22:04:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.456 ************************************ 00:07:58.456 END TEST bdev_verify 00:07:58.456 ************************************ 00:07:58.456 22:04:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:58.457 22:04:04 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:58.457 22:04:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:58.457 22:04:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.457 22:04:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.717 ************************************ 00:07:58.717 START TEST bdev_verify_big_io 00:07:58.717 ************************************ 00:07:58.717 22:04:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:58.717 [2024-12-16 22:04:04.893892] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:58.717 [2024-12-16 22:04:04.894076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75575 ] 00:07:58.717 [2024-12-16 22:04:05.064325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.977 [2024-12-16 22:04:05.095281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.977 [2024-12-16 22:04:05.095331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.238 Running I/O for 5 seconds... 
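A quick way to sanity-check these tables: the MiB/s column is simply IOPS times the I/O size. For the verify run that just completed, 21185.32 IOPS x 4096 bytes = 21185.32 / 256 MiB/s = 82.76 MiB/s, matching the Total row. The big_io run now starting uses -o 65536 (64 KiB I/Os), so the same conversion there is IOPS / 16.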
00:08:05.121 1024.00 IOPS, 64.00 MiB/s [2024-12-16T22:04:11.726Z] 2680.00 IOPS, 167.50 MiB/s [2024-12-16T22:04:11.726Z] 3208.67 IOPS, 200.54 MiB/s 00:08:05.379 Latency(us) 00:08:05.379 [2024-12-16T22:04:11.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.379 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x0 length 0xbd0b 00:08:05.379 Nvme0n1 : 5.63 113.65 7.10 0.00 0.00 1084033.89 22988.01 1180857.90 00:08:05.379 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:05.379 Nvme0n1 : 5.75 102.88 6.43 0.00 0.00 1187380.50 30852.33 1529307.77 00:08:05.379 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x0 length 0x4ff8 00:08:05.379 Nvme1n1p1 : 5.75 115.75 7.23 0.00 0.00 1031594.77 96791.63 1019538.51 00:08:05.379 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:05.379 Nvme1n1p1 : 5.76 116.72 7.29 0.00 0.00 1013258.88 92758.65 1077613.49 00:08:05.379 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x0 length 0x4ff7 00:08:05.379 Nvme1n1p2 : 5.86 120.20 7.51 0.00 0.00 969202.39 99211.42 948557.98 00:08:05.379 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:05.379 Nvme1n1p2 : 5.76 122.24 7.64 0.00 0.00 955771.52 72593.72 1013085.74 00:08:05.379 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x0 length 0x8000 00:08:05.379 Nvme2n1 : 5.96 122.57 7.66 0.00 0.00 921087.28 61301.37 1064707.94 00:08:05.379 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x8000 length 0x8000 00:08:05.379 Nvme2n1 : 5.88 126.26 7.89 0.00 0.00 896329.20 49807.36 1051802.39 00:08:05.379 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x0 length 0x8000 00:08:05.379 Nvme2n2 : 5.98 120.63 7.54 0.00 0.00 912394.11 36700.16 2039077.02 00:08:05.379 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x8000 length 0x8000 00:08:05.379 Nvme2n2 : 5.88 130.65 8.17 0.00 0.00 847095.60 65334.35 1077613.49 00:08:05.379 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x0 length 0x8000 00:08:05.379 Nvme2n3 : 6.03 130.02 8.13 0.00 0.00 822851.55 31457.28 2064888.12 00:08:05.379 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x8000 length 0x8000 00:08:05.379 Nvme2n3 : 6.02 144.74 9.05 0.00 0.00 741861.51 19862.45 1103424.59 00:08:05.379 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x0 length 0x2000 00:08:05.379 Nvme3n1 : 6.10 148.16 9.26 0.00 0.00 704060.60 797.14 1884210.41 00:08:05.379 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:05.379 Verification LBA range: start 0x2000 length 0x2000 00:08:05.379 Nvme3n1 : 6.10 164.01 10.25 0.00 0.00 640429.51 686.87 1135688.47 00:08:05.379 
[2024-12-16T22:04:11.727Z] =================================================================================================================== 00:08:05.380 [2024-12-16T22:04:11.727Z] Total : 1778.48 111.15 0.00 0.00 889097.95 686.87 2064888.12 00:08:06.310 00:08:06.310 real 0m7.704s 00:08:06.310 user 0m14.549s 00:08:06.310 sys 0m0.305s 00:08:06.310 22:04:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.310 ************************************ 00:08:06.310 END TEST bdev_verify_big_io 00:08:06.310 ************************************ 00:08:06.310 22:04:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:06.310 22:04:12 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.310 22:04:12 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:06.310 22:04:12 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.310 22:04:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:06.310 ************************************ 00:08:06.310 START TEST bdev_write_zeroes 00:08:06.310 ************************************ 00:08:06.310 22:04:12 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:06.310 [2024-12-16 22:04:12.628954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:06.310 [2024-12-16 22:04:12.629045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75673 ] 00:08:06.567 [2024-12-16 22:04:12.769408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.567 [2024-12-16 22:04:12.786037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.132 Running I/O for 1 seconds... 
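Applying the same conversion to the big_io Total above: 1778.48 IOPS x 64 KiB = 1778.48 / 16 = 111.15 MiB/s, again matching the table. The write_zeroes run starting here returns to 4 KiB I/Os with -t 1 and no -m flag, which is why the trace reports only a single core and one reactor on core 0.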
00:08:08.071 67200.00 IOPS, 262.50 MiB/s 00:08:08.071 Latency(us) 00:08:08.071 [2024-12-16T22:04:14.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.071 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.071 Nvme0n1 : 1.02 9566.61 37.37 0.00 0.00 13350.24 6956.90 27021.00 00:08:08.071 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.071 Nvme1n1p1 : 1.02 9554.95 37.32 0.00 0.00 13348.40 9527.93 27021.00 00:08:08.071 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.071 Nvme1n1p2 : 1.03 9543.34 37.28 0.00 0.00 13291.08 9427.10 24097.08 00:08:08.071 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.071 Nvme2n1 : 1.03 9532.67 37.24 0.00 0.00 13284.62 9729.58 23290.49 00:08:08.071 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.071 Nvme2n2 : 1.03 9521.98 37.20 0.00 0.00 13264.24 9679.16 22685.54 00:08:08.071 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.071 Nvme2n3 : 1.03 9511.36 37.15 0.00 0.00 13254.94 9880.81 23693.78 00:08:08.071 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:08.071 Nvme3n1 : 1.03 9500.75 37.11 0.00 0.00 13244.34 9779.99 25407.80 00:08:08.071 [2024-12-16T22:04:14.418Z] =================================================================================================================== 00:08:08.071 [2024-12-16T22:04:14.418Z] Total : 66731.66 260.67 0.00 0.00 13291.13 6956.90 27021.00 00:08:08.071 00:08:08.071 real 0m1.804s 00:08:08.071 user 0m1.537s 00:08:08.071 sys 0m0.158s 00:08:08.071 22:04:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.071 ************************************ 00:08:08.071 END TEST bdev_write_zeroes 00:08:08.071 ************************************ 00:08:08.071 22:04:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:08.332 22:04:14 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.332 22:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:08.332 22:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.332 22:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.332 ************************************ 00:08:08.332 START TEST bdev_json_nonenclosed 00:08:08.332 ************************************ 00:08:08.332 22:04:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.332 [2024-12-16 22:04:14.501923] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:08.332 [2024-12-16 22:04:14.502033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75710 ] 00:08:08.332 [2024-12-16 22:04:14.654596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.332 [2024-12-16 22:04:14.674360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.332 [2024-12-16 22:04:14.674439] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:08.332 [2024-12-16 22:04:14.674457] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.332 [2024-12-16 22:04:14.674470] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.590 00:08:08.591 real 0m0.297s 00:08:08.591 user 0m0.108s 00:08:08.591 sys 0m0.086s 00:08:08.591 22:04:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.591 ************************************ 00:08:08.591 END TEST bdev_json_nonenclosed 00:08:08.591 ************************************ 00:08:08.591 22:04:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:08.591 22:04:14 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.591 22:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:08.591 22:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.591 22:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.591 ************************************ 00:08:08.591 START TEST bdev_json_nonarray 00:08:08.591 ************************************ 00:08:08.591 22:04:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.591 [2024-12-16 22:04:14.860305] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:08.591 [2024-12-16 22:04:14.860421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75735 ] 00:08:08.851 [2024-12-16 22:04:15.017361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.851 [2024-12-16 22:04:15.036321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.851 [2024-12-16 22:04:15.036413] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
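Both JSON tests here are negative tests: bdevperf is pointed at a deliberately malformed config, and success means json_config rejects it (the *ERROR* lines traced above) and spdk_app_stop exits non-zero. The actual nonenclosed.json and nonarray.json contents are not shown in the trace, but configs of roughly these shapes would trigger the two errors (illustrative only):

# nonenclosed: top-level configuration not enclosed in {}
"subsystems": []

# nonarray: "subsystems" present but not an array
{ "subsystems": { "subsystem": "bdev" } }

A valid config, by contrast, wraps an array of subsystem objects: { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }.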
00:08:08.851 [2024-12-16 22:04:15.036429] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.851 [2024-12-16 22:04:15.036440] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.851 00:08:08.851 real 0m0.295s 00:08:08.851 user 0m0.106s 00:08:08.851 sys 0m0.086s 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.851 ************************************ 00:08:08.851 END TEST bdev_json_nonarray 00:08:08.851 ************************************ 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:08.851 22:04:15 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:08.851 22:04:15 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:08.851 22:04:15 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:08.851 22:04:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.851 22:04:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.851 22:04:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.851 ************************************ 00:08:08.851 START TEST bdev_gpt_uuid 00:08:08.851 ************************************ 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75755 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75755 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75755 ']' 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:08.851 22:04:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 [2024-12-16 22:04:15.230377] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:09.113 [2024-12-16 22:04:15.230492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75755 ] 00:08:09.113 [2024-12-16 22:04:15.379920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.113 [2024-12-16 22:04:15.399000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.055 Some configs were skipped because the RPC state that can call them passed over. 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.055 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:10.316 { 00:08:10.316 "name": "Nvme1n1p1", 00:08:10.316 "aliases": [ 00:08:10.316 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:10.316 ], 00:08:10.316 "product_name": "GPT Disk", 00:08:10.316 "block_size": 4096, 00:08:10.316 "num_blocks": 655104, 00:08:10.316 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:10.316 "assigned_rate_limits": { 00:08:10.316 "rw_ios_per_sec": 0, 00:08:10.316 "rw_mbytes_per_sec": 0, 00:08:10.316 "r_mbytes_per_sec": 0, 00:08:10.316 "w_mbytes_per_sec": 0 00:08:10.316 }, 00:08:10.316 "claimed": false, 00:08:10.316 "zoned": false, 00:08:10.316 "supported_io_types": { 00:08:10.316 "read": true, 00:08:10.316 "write": true, 00:08:10.316 "unmap": true, 00:08:10.316 "flush": true, 00:08:10.316 "reset": true, 00:08:10.316 "nvme_admin": false, 00:08:10.316 "nvme_io": false, 00:08:10.316 "nvme_io_md": false, 00:08:10.316 "write_zeroes": true, 00:08:10.316 "zcopy": false, 00:08:10.316 "get_zone_info": false, 00:08:10.316 "zone_management": false, 00:08:10.316 "zone_append": false, 00:08:10.316 "compare": true, 00:08:10.316 "compare_and_write": false, 00:08:10.316 "abort": true, 00:08:10.316 "seek_hole": false, 00:08:10.316 "seek_data": false, 00:08:10.316 "copy": true, 00:08:10.316 "nvme_iov_md": false 00:08:10.316 }, 00:08:10.316 "driver_specific": { 
00:08:10.316 "gpt": { 00:08:10.316 "base_bdev": "Nvme1n1", 00:08:10.316 "offset_blocks": 256, 00:08:10.316 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:10.316 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:10.316 "partition_name": "SPDK_TEST_first" 00:08:10.316 } 00:08:10.316 } 00:08:10.316 } 00:08:10.316 ]' 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.316 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:10.316 { 00:08:10.316 "name": "Nvme1n1p2", 00:08:10.316 "aliases": [ 00:08:10.316 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:10.316 ], 00:08:10.317 "product_name": "GPT Disk", 00:08:10.317 "block_size": 4096, 00:08:10.317 "num_blocks": 655103, 00:08:10.317 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:10.317 "assigned_rate_limits": { 00:08:10.317 "rw_ios_per_sec": 0, 00:08:10.317 "rw_mbytes_per_sec": 0, 00:08:10.317 "r_mbytes_per_sec": 0, 00:08:10.317 "w_mbytes_per_sec": 0 00:08:10.317 }, 00:08:10.317 "claimed": false, 00:08:10.317 "zoned": false, 00:08:10.317 "supported_io_types": { 00:08:10.317 "read": true, 00:08:10.317 "write": true, 00:08:10.317 "unmap": true, 00:08:10.317 "flush": true, 00:08:10.317 "reset": true, 00:08:10.317 "nvme_admin": false, 00:08:10.317 "nvme_io": false, 00:08:10.317 "nvme_io_md": false, 00:08:10.317 "write_zeroes": true, 00:08:10.317 "zcopy": false, 00:08:10.317 "get_zone_info": false, 00:08:10.317 "zone_management": false, 00:08:10.317 "zone_append": false, 00:08:10.317 "compare": true, 00:08:10.317 "compare_and_write": false, 00:08:10.317 "abort": true, 00:08:10.317 "seek_hole": false, 00:08:10.317 "seek_data": false, 00:08:10.317 "copy": true, 00:08:10.317 "nvme_iov_md": false 00:08:10.317 }, 00:08:10.317 "driver_specific": { 00:08:10.317 "gpt": { 00:08:10.317 "base_bdev": "Nvme1n1", 00:08:10.317 "offset_blocks": 655360, 00:08:10.317 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:10.317 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:10.317 "partition_name": "SPDK_TEST_second" 00:08:10.317 } 00:08:10.317 } 00:08:10.317 } 00:08:10.317 ]' 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 75755 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75755 ']' 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75755 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.317 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75755 00:08:10.579 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.579 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.579 killing process with pid 75755 00:08:10.579 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75755' 00:08:10.579 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75755 00:08:10.579 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75755 00:08:10.579 00:08:10.579 real 0m1.765s 00:08:10.579 user 0m1.963s 00:08:10.579 sys 0m0.325s 00:08:10.579 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.579 ************************************ 00:08:10.579 END TEST bdev_gpt_uuid 00:08:10.579 ************************************ 00:08:10.579 22:04:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:10.841 22:04:16 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:11.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.102 Waiting for block devices as requested 00:08:11.364 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.364 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:11.364 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.364 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:16.689 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:16.689 22:04:22 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:16.689 22:04:22 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:16.689 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.689 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:16.689 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:16.689 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:16.689 22:04:22 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:16.689 00:08:16.689 real 0m50.728s 00:08:16.689 user 1m2.606s 00:08:16.689 sys 0m8.089s 00:08:16.689 22:04:23 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.689 ************************************ 00:08:16.689 END TEST blockdev_nvme_gpt 00:08:16.689 ************************************ 00:08:16.689 22:04:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.950 22:04:23 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:16.950 22:04:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.950 22:04:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.950 22:04:23 -- common/autotest_common.sh@10 -- # set +x 00:08:16.950 ************************************ 00:08:16.950 START TEST nvme 00:08:16.950 ************************************ 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:16.950 * Looking for test storage... 00:08:16.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.950 22:04:23 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.950 22:04:23 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.950 22:04:23 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.950 22:04:23 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.950 22:04:23 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.950 22:04:23 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.950 22:04:23 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.950 22:04:23 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:16.950 22:04:23 nvme -- scripts/common.sh@345 -- # : 1 00:08:16.950 22:04:23 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.950 22:04:23 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.950 22:04:23 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:16.950 22:04:23 nvme -- scripts/common.sh@353 -- # local d=1 00:08:16.950 22:04:23 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.950 22:04:23 nvme -- scripts/common.sh@355 -- # echo 1 00:08:16.950 22:04:23 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.950 22:04:23 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@353 -- # local d=2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.950 22:04:23 nvme -- scripts/common.sh@355 -- # echo 2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.950 22:04:23 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.950 22:04:23 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.950 22:04:23 nvme -- scripts/common.sh@368 -- # return 0 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.950 --rc genhtml_branch_coverage=1 00:08:16.950 --rc genhtml_function_coverage=1 00:08:16.950 --rc genhtml_legend=1 00:08:16.950 --rc geninfo_all_blocks=1 00:08:16.950 --rc geninfo_unexecuted_blocks=1 00:08:16.950 00:08:16.950 ' 00:08:16.950 22:04:23 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.950 --rc genhtml_branch_coverage=1 00:08:16.950 --rc genhtml_function_coverage=1 00:08:16.950 --rc genhtml_legend=1 00:08:16.950 --rc geninfo_all_blocks=1 00:08:16.950 --rc geninfo_unexecuted_blocks=1 00:08:16.950 00:08:16.950 ' 00:08:16.951 22:04:23 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.951 --rc genhtml_branch_coverage=1 00:08:16.951 --rc genhtml_function_coverage=1 00:08:16.951 --rc genhtml_legend=1 00:08:16.951 --rc geninfo_all_blocks=1 00:08:16.951 --rc geninfo_unexecuted_blocks=1 00:08:16.951 00:08:16.951 ' 00:08:16.951 22:04:23 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.951 --rc genhtml_branch_coverage=1 00:08:16.951 --rc genhtml_function_coverage=1 00:08:16.951 --rc genhtml_legend=1 00:08:16.951 --rc geninfo_all_blocks=1 00:08:16.951 --rc geninfo_unexecuted_blocks=1 00:08:16.951 00:08:16.951 ' 00:08:16.951 22:04:23 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:17.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:18.095 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.095 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.095 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.095 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.095 22:04:24 nvme -- nvme/nvme.sh@79 -- # uname 00:08:18.095 22:04:24 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:18.095 22:04:24 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:18.095 22:04:24 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:18.095 22:04:24 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:18.095 Waiting for stub to ready for secondary processes... 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1075 -- # stubpid=76379 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76379 ]] 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:18.095 22:04:24 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:18.095 [2024-12-16 22:04:24.406631] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:18.095 [2024-12-16 22:04:24.406781] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:19.039 22:04:25 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:19.039 22:04:25 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76379 ]] 00:08:19.039 22:04:25 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:19.039 [2024-12-16 22:04:25.381560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.298 [2024-12-16 22:04:25.399016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.298 [2024-12-16 22:04:25.399324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.298 [2024-12-16 22:04:25.399378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.298 [2024-12-16 22:04:25.414343] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:19.298 [2024-12-16 22:04:25.414392] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:19.298 [2024-12-16 22:04:25.429710] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:19.298 [2024-12-16 22:04:25.429922] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:19.298 [2024-12-16 22:04:25.432546] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:19.298 [2024-12-16 22:04:25.433037] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:19.298 [2024-12-16 22:04:25.433140] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:19.298 [2024-12-16 22:04:25.435981] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:19.298 [2024-12-16 22:04:25.436311] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:19.298 [2024-12-16 22:04:25.436414] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:19.298 [2024-12-16 22:04:25.438331] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:19.298 [2024-12-16 22:04:25.438681] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:19.299 [2024-12-16 22:04:25.438794] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:19.299 [2024-12-16 22:04:25.438941] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:19.299 [2024-12-16 22:04:25.439064] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:20.241 done. 00:08:20.241 22:04:26 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:20.241 22:04:26 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:20.241 22:04:26 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:20.241 22:04:26 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:20.241 22:04:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.241 22:04:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.241 ************************************ 00:08:20.241 START TEST nvme_reset 00:08:20.241 ************************************ 00:08:20.241 22:04:26 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:20.502 Initializing NVMe Controllers 00:08:20.502 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:20.502 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:20.502 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:20.502 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:20.502 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:20.502 00:08:20.502 real 0m0.230s 00:08:20.502 user 0m0.073s 00:08:20.502 sys 0m0.103s 00:08:20.502 ************************************ 00:08:20.502 END TEST nvme_reset 00:08:20.502 ************************************ 00:08:20.502 22:04:26 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.502 22:04:26 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:20.502 22:04:26 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:20.502 22:04:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.502 22:04:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.502 22:04:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.502 ************************************ 00:08:20.502 START TEST nvme_identify 00:08:20.502 ************************************ 00:08:20.502 22:04:26 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:20.502 22:04:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:20.502 22:04:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:20.502 22:04:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:20.502 22:04:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:20.502 22:04:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:20.502 22:04:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:20.502 22:04:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:20.502 22:04:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:20.502 22:04:26 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:20.502 22:04:26 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:20.502 22:04:26 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:20.502 22:04:26 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:20.766 [2024-12-16 22:04:26.913817] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76413 terminated unexpected 00:08:20.766 ===================================================== 00:08:20.766 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:20.766 ===================================================== 00:08:20.766 Controller Capabilities/Features 00:08:20.766 ================================ 00:08:20.766 Vendor ID: 1b36 00:08:20.766 Subsystem Vendor ID: 1af4 00:08:20.766 Serial Number: 12340 00:08:20.766 Model Number: QEMU NVMe Ctrl 00:08:20.766 Firmware Version: 8.0.0 00:08:20.766 Recommended Arb Burst: 6 00:08:20.766 IEEE OUI Identifier: 00 54 52 00:08:20.766 Multi-path I/O 00:08:20.766 May have multiple subsystem ports: No 00:08:20.766 May have multiple controllers: No 00:08:20.766 Associated with SR-IOV VF: No 00:08:20.766 Max Data Transfer Size: 524288 00:08:20.766 Max Number of Namespaces: 256 00:08:20.766 Max Number of I/O Queues: 64 00:08:20.766 NVMe Specification Version (VS): 1.4 00:08:20.766 NVMe Specification Version (Identify): 1.4 00:08:20.766 Maximum Queue Entries: 2048 00:08:20.766 Contiguous Queues Required: Yes 00:08:20.766 Arbitration Mechanisms Supported 00:08:20.766 Weighted Round Robin: Not Supported 00:08:20.766 Vendor Specific: Not Supported 00:08:20.766 Reset Timeout: 7500 ms 00:08:20.766 Doorbell Stride: 4 bytes 00:08:20.766 NVM Subsystem Reset: Not Supported 00:08:20.766 Command Sets Supported 00:08:20.766 NVM Command Set: Supported 00:08:20.766 Boot Partition: Not Supported 00:08:20.766 Memory Page Size Minimum: 4096 bytes 00:08:20.766 Memory Page Size Maximum: 65536 bytes 00:08:20.766 Persistent Memory Region: Not Supported 00:08:20.766 Optional Asynchronous Events Supported 00:08:20.766 Namespace Attribute Notices: Supported 00:08:20.766 Firmware Activation Notices: Not Supported 00:08:20.766 ANA Change Notices: Not Supported 00:08:20.766 PLE Aggregate Log Change Notices: Not Supported 00:08:20.766 LBA Status Info Alert Notices: Not Supported 00:08:20.766 EGE Aggregate Log Change Notices: Not Supported 00:08:20.766 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.766 Zone Descriptor Change Notices: Not Supported 00:08:20.766 Discovery Log Change Notices: Not Supported 00:08:20.766 Controller Attributes 00:08:20.766 128-bit Host Identifier: Not Supported 00:08:20.766 Non-Operational Permissive Mode: Not Supported 00:08:20.766 NVM Sets: Not Supported 00:08:20.766 Read Recovery Levels: Not Supported 00:08:20.766 Endurance Groups: Not Supported 00:08:20.766 Predictable Latency Mode: Not Supported 00:08:20.766 Traffic Based Keep ALive: Not Supported 00:08:20.766 Namespace Granularity: Not Supported 00:08:20.766 SQ Associations: Not Supported 00:08:20.766 UUID List: Not Supported 00:08:20.766 Multi-Domain Subsystem: Not Supported 00:08:20.766 Fixed Capacity Management: Not Supported 00:08:20.766 Variable Capacity Management: Not Supported 00:08:20.766 Delete Endurance Group: Not Supported 00:08:20.766 Delete NVM Set: Not Supported 00:08:20.766 Extended LBA Formats Supported: Supported 00:08:20.766 Flexible Data Placement Supported: Not Supported 00:08:20.766 00:08:20.766 Controller Memory Buffer Support 00:08:20.766 ================================ 00:08:20.766 Supported: No 
00:08:20.766 00:08:20.766 Persistent Memory Region Support 00:08:20.766 ================================ 00:08:20.766 Supported: No 00:08:20.766 00:08:20.766 Admin Command Set Attributes 00:08:20.766 ============================ 00:08:20.766 Security Send/Receive: Not Supported 00:08:20.766 Format NVM: Supported 00:08:20.766 Firmware Activate/Download: Not Supported 00:08:20.766 Namespace Management: Supported 00:08:20.766 Device Self-Test: Not Supported 00:08:20.766 Directives: Supported 00:08:20.766 NVMe-MI: Not Supported 00:08:20.766 Virtualization Management: Not Supported 00:08:20.766 Doorbell Buffer Config: Supported 00:08:20.766 Get LBA Status Capability: Not Supported 00:08:20.766 Command & Feature Lockdown Capability: Not Supported 00:08:20.766 Abort Command Limit: 4 00:08:20.766 Async Event Request Limit: 4 00:08:20.766 Number of Firmware Slots: N/A 00:08:20.766 Firmware Slot 1 Read-Only: N/A 00:08:20.766 Firmware Activation Without Reset: N/A 00:08:20.766 Multiple Update Detection Support: N/A 00:08:20.766 Firmware Update Granularity: No Information Provided 00:08:20.766 Per-Namespace SMART Log: Yes 00:08:20.766 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.766 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:20.766 Command Effects Log Page: Supported 00:08:20.766 Get Log Page Extended Data: Supported 00:08:20.766 Telemetry Log Pages: Not Supported 00:08:20.766 Persistent Event Log Pages: Not Supported 00:08:20.766 Supported Log Pages Log Page: May Support 00:08:20.766 Commands Supported & Effects Log Page: Not Supported 00:08:20.766 Feature Identifiers & Effects Log Page:May Support 00:08:20.766 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.766 Data Area 4 for Telemetry Log: Not Supported 00:08:20.766 Error Log Page Entries Supported: 1 00:08:20.766 Keep Alive: Not Supported 00:08:20.766 00:08:20.766 NVM Command Set Attributes 00:08:20.766 ========================== 00:08:20.766 Submission Queue Entry Size 00:08:20.766 Max: 64 00:08:20.766 Min: 64 00:08:20.766 Completion Queue Entry Size 00:08:20.766 Max: 16 00:08:20.766 Min: 16 00:08:20.766 Number of Namespaces: 256 00:08:20.766 Compare Command: Supported 00:08:20.766 Write Uncorrectable Command: Not Supported 00:08:20.766 Dataset Management Command: Supported 00:08:20.766 Write Zeroes Command: Supported 00:08:20.766 Set Features Save Field: Supported 00:08:20.766 Reservations: Not Supported 00:08:20.766 Timestamp: Supported 00:08:20.766 Copy: Supported 00:08:20.766 Volatile Write Cache: Present 00:08:20.766 Atomic Write Unit (Normal): 1 00:08:20.766 Atomic Write Unit (PFail): 1 00:08:20.766 Atomic Compare & Write Unit: 1 00:08:20.766 Fused Compare & Write: Not Supported 00:08:20.766 Scatter-Gather List 00:08:20.766 SGL Command Set: Supported 00:08:20.766 SGL Keyed: Not Supported 00:08:20.766 SGL Bit Bucket Descriptor: Not Supported 00:08:20.766 SGL Metadata Pointer: Not Supported 00:08:20.766 Oversized SGL: Not Supported 00:08:20.766 SGL Metadata Address: Not Supported 00:08:20.766 SGL Offset: Not Supported 00:08:20.766 Transport SGL Data Block: Not Supported 00:08:20.766 Replay Protected Memory Block: Not Supported 00:08:20.766 00:08:20.766 Firmware Slot Information 00:08:20.766 ========================= 00:08:20.766 Active slot: 1 00:08:20.766 Slot 1 Firmware Revision: 1.0 00:08:20.766 00:08:20.766 00:08:20.766 Commands Supported and Effects 00:08:20.766 ============================== 00:08:20.766 Admin Commands 00:08:20.766 -------------- 00:08:20.766 Delete I/O Submission Queue (00h): Supported 
00:08:20.766 Create I/O Submission Queue (01h): Supported 00:08:20.767 Get Log Page (02h): Supported 00:08:20.767 Delete I/O Completion Queue (04h): Supported 00:08:20.767 Create I/O Completion Queue (05h): Supported 00:08:20.767 Identify (06h): Supported 00:08:20.767 Abort (08h): Supported 00:08:20.767 Set Features (09h): Supported 00:08:20.767 Get Features (0Ah): Supported 00:08:20.767 Asynchronous Event Request (0Ch): Supported 00:08:20.767 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.767 Directive Send (19h): Supported 00:08:20.767 Directive Receive (1Ah): Supported 00:08:20.767 Virtualization Management (1Ch): Supported 00:08:20.767 Doorbell Buffer Config (7Ch): Supported 00:08:20.767 Format NVM (80h): Supported LBA-Change 00:08:20.767 I/O Commands 00:08:20.767 ------------ 00:08:20.767 Flush (00h): Supported LBA-Change 00:08:20.767 Write (01h): Supported LBA-Change 00:08:20.767 Read (02h): Supported 00:08:20.767 Compare (05h): Supported 00:08:20.767 Write Zeroes (08h): Supported LBA-Change 00:08:20.767 Dataset Management (09h): Supported LBA-Change 00:08:20.767 Unknown (0Ch): Supported 00:08:20.767 Unknown (12h): Supported 00:08:20.767 Copy (19h): Supported LBA-Change 00:08:20.767 Unknown (1Dh): Supported LBA-Change 00:08:20.767 00:08:20.767 Error Log 00:08:20.767 ========= 00:08:20.767 00:08:20.767 Arbitration 00:08:20.767 =========== 00:08:20.767 Arbitration Burst: no limit 00:08:20.767 00:08:20.767 Power Management 00:08:20.767 ================ 00:08:20.767 Number of Power States: 1 00:08:20.767 Current Power State: Power State #0 00:08:20.767 Power State #0: 00:08:20.767 Max Power: 25.00 W 00:08:20.767 Non-Operational State: Operational 00:08:20.767 Entry Latency: 16 microseconds 00:08:20.767 Exit Latency: 4 microseconds 00:08:20.767 Relative Read Throughput: 0 00:08:20.767 Relative Read Latency: 0 00:08:20.767 Relative Write Throughput: 0 00:08:20.767 Relative Write Latency: 0 00:08:20.767 [2024-12-16 22:04:26.914941] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76413 terminated unexpected 00:08:20.767 Idle Power: Not Reported 00:08:20.767 Active Power: Not Reported 00:08:20.767 Non-Operational Permissive Mode: Not Supported 00:08:20.767 00:08:20.767 Health Information 00:08:20.767 ================== 00:08:20.767 Critical Warnings: 00:08:20.767 Available Spare Space: OK 00:08:20.767 Temperature: OK 00:08:20.767 Device Reliability: OK 00:08:20.767 Read Only: No 00:08:20.767 Volatile Memory Backup: OK 00:08:20.767 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.767 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.767 Available Spare: 0% 00:08:20.767 Available Spare Threshold: 0% 00:08:20.767 Life Percentage Used: 0% 00:08:20.767 Data Units Read: 708 00:08:20.767 Data Units Written: 636 00:08:20.767 Host Read Commands: 38852 00:08:20.767 Host Write Commands: 38638 00:08:20.767 Controller Busy Time: 0 minutes 00:08:20.767 Power Cycles: 0 00:08:20.767 Power On Hours: 0 hours 00:08:20.767 Unsafe Shutdowns: 0 00:08:20.767 Unrecoverable Media Errors: 0 00:08:20.767 Lifetime Error Log Entries: 0 00:08:20.767 Warning Temperature Time: 0 minutes 00:08:20.767 Critical Temperature Time: 0 minutes 00:08:20.767 00:08:20.767 Number of Queues 00:08:20.767 ================ 00:08:20.767 Number of I/O Submission Queues: 64 00:08:20.767 Number of I/O Completion Queues: 64 00:08:20.767 00:08:20.767 ZNS Specific Controller Data 00:08:20.767 ============================ 00:08:20.767 Zone Append Size Limit: 0 00:08:20.767
00:08:20.767 00:08:20.767 Active Namespaces 00:08:20.767 ================= 00:08:20.767 Namespace ID:1 00:08:20.767 Error Recovery Timeout: Unlimited 00:08:20.767 Command Set Identifier: NVM (00h) 00:08:20.767 Deallocate: Supported 00:08:20.767 Deallocated/Unwritten Error: Supported 00:08:20.767 Deallocated Read Value: All 0x00 00:08:20.767 Deallocate in Write Zeroes: Not Supported 00:08:20.767 Deallocated Guard Field: 0xFFFF 00:08:20.767 Flush: Supported 00:08:20.767 Reservation: Not Supported 00:08:20.767 Metadata Transferred as: Separate Metadata Buffer 00:08:20.767 Namespace Sharing Capabilities: Private 00:08:20.767 Size (in LBAs): 1548666 (5GiB) 00:08:20.767 Capacity (in LBAs): 1548666 (5GiB) 00:08:20.767 Utilization (in LBAs): 1548666 (5GiB) 00:08:20.767 Thin Provisioning: Not Supported 00:08:20.767 Per-NS Atomic Units: No 00:08:20.767 Maximum Single Source Range Length: 128 00:08:20.767 Maximum Copy Length: 128 00:08:20.767 Maximum Source Range Count: 128 00:08:20.767 NGUID/EUI64 Never Reused: No 00:08:20.767 Namespace Write Protected: No 00:08:20.767 Number of LBA Formats: 8 00:08:20.767 Current LBA Format: LBA Format #07 00:08:20.767 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.767 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.767 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.767 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.767 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.767 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.767 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.767 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.767 00:08:20.767 NVM Specific Namespace Data 00:08:20.767 =========================== 00:08:20.767 Logical Block Storage Tag Mask: 0 00:08:20.767 Protection Information Capabilities: 00:08:20.767 16b Guard Protection Information Storage Tag Support: No 00:08:20.767 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.767 Storage Tag Check Read Support: No 00:08:20.767 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.767 ===================================================== 00:08:20.767 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:20.767 ===================================================== 00:08:20.767 Controller Capabilities/Features 00:08:20.767 ================================ 00:08:20.767 Vendor ID: 1b36 00:08:20.767 Subsystem Vendor ID: 1af4 00:08:20.767 Serial Number: 12341 00:08:20.767 Model Number: QEMU NVMe Ctrl 00:08:20.767 Firmware Version: 8.0.0 00:08:20.767 Recommended Arb Burst: 6 00:08:20.767 IEEE OUI Identifier: 00 54 52 00:08:20.767 Multi-path I/O 00:08:20.767 May have multiple subsystem ports: No 00:08:20.767 May have multiple controllers: No 
00:08:20.767 Associated with SR-IOV VF: No 00:08:20.767 Max Data Transfer Size: 524288 00:08:20.767 Max Number of Namespaces: 256 00:08:20.767 Max Number of I/O Queues: 64 00:08:20.767 NVMe Specification Version (VS): 1.4 00:08:20.767 NVMe Specification Version (Identify): 1.4 00:08:20.767 Maximum Queue Entries: 2048 00:08:20.767 Contiguous Queues Required: Yes 00:08:20.767 Arbitration Mechanisms Supported 00:08:20.767 Weighted Round Robin: Not Supported 00:08:20.767 Vendor Specific: Not Supported 00:08:20.767 Reset Timeout: 7500 ms 00:08:20.767 Doorbell Stride: 4 bytes 00:08:20.767 NVM Subsystem Reset: Not Supported 00:08:20.767 Command Sets Supported 00:08:20.767 NVM Command Set: Supported 00:08:20.767 Boot Partition: Not Supported 00:08:20.767 Memory Page Size Minimum: 4096 bytes 00:08:20.767 Memory Page Size Maximum: 65536 bytes 00:08:20.767 Persistent Memory Region: Not Supported 00:08:20.767 Optional Asynchronous Events Supported 00:08:20.767 Namespace Attribute Notices: Supported 00:08:20.767 Firmware Activation Notices: Not Supported 00:08:20.767 ANA Change Notices: Not Supported 00:08:20.767 PLE Aggregate Log Change Notices: Not Supported 00:08:20.767 LBA Status Info Alert Notices: Not Supported 00:08:20.767 EGE Aggregate Log Change Notices: Not Supported 00:08:20.767 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.767 Zone Descriptor Change Notices: Not Supported 00:08:20.767 Discovery Log Change Notices: Not Supported 00:08:20.767 Controller Attributes 00:08:20.768 128-bit Host Identifier: Not Supported 00:08:20.768 Non-Operational Permissive Mode: Not Supported 00:08:20.768 NVM Sets: Not Supported 00:08:20.768 Read Recovery Levels: Not Supported 00:08:20.768 Endurance Groups: Not Supported 00:08:20.768 Predictable Latency Mode: Not Supported 00:08:20.768 Traffic Based Keep ALive: Not Supported 00:08:20.768 Namespace Granularity: Not Supported 00:08:20.768 SQ Associations: Not Supported 00:08:20.768 UUID List: Not Supported 00:08:20.768 Multi-Domain Subsystem: Not Supported 00:08:20.768 Fixed Capacity Management: Not Supported 00:08:20.768 Variable Capacity Management: Not Supported 00:08:20.768 Delete Endurance Group: Not Supported 00:08:20.768 Delete NVM Set: Not Supported 00:08:20.768 Extended LBA Formats Supported: Supported 00:08:20.768 Flexible Data Placement Supported: Not Supported 00:08:20.768 00:08:20.768 Controller Memory Buffer Support 00:08:20.768 ================================ 00:08:20.768 Supported: No 00:08:20.768 00:08:20.768 Persistent Memory Region Support 00:08:20.768 ================================ 00:08:20.768 Supported: No 00:08:20.768 00:08:20.768 Admin Command Set Attributes 00:08:20.768 ============================ 00:08:20.768 Security Send/Receive: Not Supported 00:08:20.768 Format NVM: Supported 00:08:20.768 Firmware Activate/Download: Not Supported 00:08:20.768 Namespace Management: Supported 00:08:20.768 Device Self-Test: Not Supported 00:08:20.768 Directives: Supported 00:08:20.768 NVMe-MI: Not Supported 00:08:20.768 Virtualization Management: Not Supported 00:08:20.768 Doorbell Buffer Config: Supported 00:08:20.768 Get LBA Status Capability: Not Supported 00:08:20.768 Command & Feature Lockdown Capability: Not Supported 00:08:20.768 Abort Command Limit: 4 00:08:20.768 Async Event Request Limit: 4 00:08:20.768 Number of Firmware Slots: N/A 00:08:20.768 Firmware Slot 1 Read-Only: N/A 00:08:20.768 Firmware Activation Without Reset: N/A 00:08:20.768 Multiple Update Detection Support: N/A 00:08:20.768 Firmware Update Granularity: No 
Information Provided 00:08:20.768 Per-Namespace SMART Log: Yes 00:08:20.768 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.768 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:20.768 Command Effects Log Page: Supported 00:08:20.768 Get Log Page Extended Data: Supported 00:08:20.768 Telemetry Log Pages: Not Supported 00:08:20.768 Persistent Event Log Pages: Not Supported 00:08:20.768 Supported Log Pages Log Page: May Support 00:08:20.768 Commands Supported & Effects Log Page: Not Supported 00:08:20.768 Feature Identifiers & Effects Log Page:May Support 00:08:20.768 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.768 Data Area 4 for Telemetry Log: Not Supported 00:08:20.768 Error Log Page Entries Supported: 1 00:08:20.768 Keep Alive: Not Supported 00:08:20.768 00:08:20.768 NVM Command Set Attributes 00:08:20.768 ========================== 00:08:20.768 Submission Queue Entry Size 00:08:20.768 Max: 64 00:08:20.768 Min: 64 00:08:20.768 Completion Queue Entry Size 00:08:20.768 Max: 16 00:08:20.768 Min: 16 00:08:20.768 Number of Namespaces: 256 00:08:20.768 Compare Command: Supported 00:08:20.768 Write Uncorrectable Command: Not Supported 00:08:20.768 Dataset Management Command: Supported 00:08:20.768 Write Zeroes Command: Supported 00:08:20.768 Set Features Save Field: Supported 00:08:20.768 Reservations: Not Supported 00:08:20.768 Timestamp: Supported 00:08:20.768 Copy: Supported 00:08:20.768 Volatile Write Cache: Present 00:08:20.768 Atomic Write Unit (Normal): 1 00:08:20.768 Atomic Write Unit (PFail): 1 00:08:20.768 Atomic Compare & Write Unit: 1 00:08:20.768 Fused Compare & Write: Not Supported 00:08:20.768 Scatter-Gather List 00:08:20.768 SGL Command Set: Supported 00:08:20.768 SGL Keyed: Not Supported 00:08:20.768 SGL Bit Bucket Descriptor: Not Supported 00:08:20.768 SGL Metadata Pointer: Not Supported 00:08:20.768 Oversized SGL: Not Supported 00:08:20.768 SGL Metadata Address: Not Supported 00:08:20.768 SGL Offset: Not Supported 00:08:20.768 Transport SGL Data Block: Not Supported 00:08:20.768 Replay Protected Memory Block: Not Supported 00:08:20.768 00:08:20.768 Firmware Slot Information 00:08:20.768 ========================= 00:08:20.768 Active slot: 1 00:08:20.768 Slot 1 Firmware Revision: 1.0 00:08:20.768 00:08:20.768 00:08:20.768 Commands Supported and Effects 00:08:20.768 ============================== 00:08:20.768 Admin Commands 00:08:20.768 -------------- 00:08:20.768 Delete I/O Submission Queue (00h): Supported 00:08:20.768 Create I/O Submission Queue (01h): Supported 00:08:20.768 Get Log Page (02h): Supported 00:08:20.768 Delete I/O Completion Queue (04h): Supported 00:08:20.768 Create I/O Completion Queue (05h): Supported 00:08:20.768 Identify (06h): Supported 00:08:20.768 Abort (08h): Supported 00:08:20.768 Set Features (09h): Supported 00:08:20.768 Get Features (0Ah): Supported 00:08:20.768 Asynchronous Event Request (0Ch): Supported 00:08:20.768 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.768 Directive Send (19h): Supported 00:08:20.768 Directive Receive (1Ah): Supported 00:08:20.768 Virtualization Management (1Ch): Supported 00:08:20.768 Doorbell Buffer Config (7Ch): Supported 00:08:20.768 Format NVM (80h): Supported LBA-Change 00:08:20.768 I/O Commands 00:08:20.768 ------------ 00:08:20.768 Flush (00h): Supported LBA-Change 00:08:20.768 Write (01h): Supported LBA-Change 00:08:20.768 Read (02h): Supported 00:08:20.768 Compare (05h): Supported 00:08:20.768 Write Zeroes (08h): Supported LBA-Change 00:08:20.768 Dataset Management 
(09h): Supported LBA-Change 00:08:20.768 Unknown (0Ch): Supported 00:08:20.768 Unknown (12h): Supported 00:08:20.768 Copy (19h): Supported LBA-Change 00:08:20.768 Unknown (1Dh): Supported LBA-Change 00:08:20.768 00:08:20.768 Error Log 00:08:20.768 ========= 00:08:20.768 00:08:20.768 Arbitration 00:08:20.768 =========== 00:08:20.768 Arbitration Burst: no limit 00:08:20.768 00:08:20.768 Power Management 00:08:20.768 ================ 00:08:20.768 Number of Power States: 1 00:08:20.768 Current Power State: Power State #0 00:08:20.768 Power State #0: 00:08:20.768 Max Power: 25.00 W 00:08:20.768 Non-Operational State: Operational 00:08:20.768 Entry Latency: 16 microseconds 00:08:20.768 Exit Latency: 4 microseconds 00:08:20.768 Relative Read Throughput: 0 00:08:20.768 Relative Read Latency: 0 00:08:20.768 Relative Write Throughput: 0 00:08:20.768 Relative Write Latency: 0 00:08:20.768 Idle Power: Not Reported 00:08:20.768 Active Power: Not Reported 00:08:20.768 Non-Operational Permissive Mode: Not Supported 00:08:20.768 00:08:20.768 Health Information 00:08:20.768 ================== 00:08:20.768 Critical Warnings: 00:08:20.768 Available Spare Space: OK 00:08:20.768 [2024-12-16 22:04:26.915698] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76413 terminated unexpected 00:08:20.768 Temperature: OK 00:08:20.768 Device Reliability: OK 00:08:20.768 Read Only: No 00:08:20.768 Volatile Memory Backup: OK 00:08:20.768 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.768 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.768 Available Spare: 0% 00:08:20.768 Available Spare Threshold: 0% 00:08:20.768 Life Percentage Used: 0% 00:08:20.768 Data Units Read: 1091 00:08:20.768 Data Units Written: 964 00:08:20.768 Host Read Commands: 56889 00:08:20.768 Host Write Commands: 55784 00:08:20.768 Controller Busy Time: 0 minutes 00:08:20.768 Power Cycles: 0 00:08:20.768 Power On Hours: 0 hours 00:08:20.768 Unsafe Shutdowns: 0 00:08:20.768 Unrecoverable Media Errors: 0 00:08:20.768 Lifetime Error Log Entries: 0 00:08:20.768 Warning Temperature Time: 0 minutes 00:08:20.768 Critical Temperature Time: 0 minutes 00:08:20.768 00:08:20.768 Number of Queues 00:08:20.768 ================ 00:08:20.768 Number of I/O Submission Queues: 64 00:08:20.768 Number of I/O Completion Queues: 64 00:08:20.768 00:08:20.768 ZNS Specific Controller Data 00:08:20.768 ============================ 00:08:20.768 Zone Append Size Limit: 0 00:08:20.768 00:08:20.768 00:08:20.768 Active Namespaces 00:08:20.768 ================= 00:08:20.768 Namespace ID:1 00:08:20.768 Error Recovery Timeout: Unlimited 00:08:20.768 Command Set Identifier: NVM (00h) 00:08:20.768 Deallocate: Supported 00:08:20.769 Deallocated/Unwritten Error: Supported 00:08:20.769 Deallocated Read Value: All 0x00 00:08:20.769 Deallocate in Write Zeroes: Not Supported 00:08:20.769 Deallocated Guard Field: 0xFFFF 00:08:20.769 Flush: Supported 00:08:20.769 Reservation: Not Supported 00:08:20.769 Namespace Sharing Capabilities: Private 00:08:20.769 Size (in LBAs): 1310720 (5GiB) 00:08:20.769 Capacity (in LBAs): 1310720 (5GiB) 00:08:20.769 Utilization (in LBAs): 1310720 (5GiB) 00:08:20.769 Thin Provisioning: Not Supported 00:08:20.769 Per-NS Atomic Units: No 00:08:20.769 Maximum Single Source Range Length: 128 00:08:20.769 Maximum Copy Length: 128 00:08:20.769 Maximum Source Range Count: 128 00:08:20.769 NGUID/EUI64 Never Reused: No 00:08:20.769 Namespace Write Protected: No 00:08:20.769 Number of LBA Formats: 8 00:08:20.769 Current LBA Format:
LBA Format #04 00:08:20.769 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.769 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.769 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.769 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.769 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.769 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.769 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.769 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.769 00:08:20.769 NVM Specific Namespace Data 00:08:20.769 =========================== 00:08:20.769 Logical Block Storage Tag Mask: 0 00:08:20.769 Protection Information Capabilities: 00:08:20.769 16b Guard Protection Information Storage Tag Support: No 00:08:20.769 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.769 Storage Tag Check Read Support: No 00:08:20.769 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.769 ===================================================== 00:08:20.769 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:20.769 ===================================================== 00:08:20.769 Controller Capabilities/Features 00:08:20.769 ================================ 00:08:20.769 Vendor ID: 1b36 00:08:20.769 Subsystem Vendor ID: 1af4 00:08:20.769 Serial Number: 12343 00:08:20.769 Model Number: QEMU NVMe Ctrl 00:08:20.769 Firmware Version: 8.0.0 00:08:20.769 Recommended Arb Burst: 6 00:08:20.769 IEEE OUI Identifier: 00 54 52 00:08:20.769 Multi-path I/O 00:08:20.769 May have multiple subsystem ports: No 00:08:20.769 May have multiple controllers: Yes 00:08:20.769 Associated with SR-IOV VF: No 00:08:20.769 Max Data Transfer Size: 524288 00:08:20.769 Max Number of Namespaces: 256 00:08:20.769 Max Number of I/O Queues: 64 00:08:20.769 NVMe Specification Version (VS): 1.4 00:08:20.769 NVMe Specification Version (Identify): 1.4 00:08:20.769 Maximum Queue Entries: 2048 00:08:20.769 Contiguous Queues Required: Yes 00:08:20.769 Arbitration Mechanisms Supported 00:08:20.769 Weighted Round Robin: Not Supported 00:08:20.769 Vendor Specific: Not Supported 00:08:20.769 Reset Timeout: 7500 ms 00:08:20.769 Doorbell Stride: 4 bytes 00:08:20.769 NVM Subsystem Reset: Not Supported 00:08:20.769 Command Sets Supported 00:08:20.769 NVM Command Set: Supported 00:08:20.769 Boot Partition: Not Supported 00:08:20.769 Memory Page Size Minimum: 4096 bytes 00:08:20.769 Memory Page Size Maximum: 65536 bytes 00:08:20.769 Persistent Memory Region: Not Supported 00:08:20.769 Optional Asynchronous Events Supported 00:08:20.769 Namespace Attribute Notices: Supported 00:08:20.769 Firmware Activation Notices: Not Supported 00:08:20.769 ANA Change Notices: Not Supported 00:08:20.769 PLE Aggregate Log 
Change Notices: Not Supported 00:08:20.769 LBA Status Info Alert Notices: Not Supported 00:08:20.769 EGE Aggregate Log Change Notices: Not Supported 00:08:20.769 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.769 Zone Descriptor Change Notices: Not Supported 00:08:20.769 Discovery Log Change Notices: Not Supported 00:08:20.769 Controller Attributes 00:08:20.769 128-bit Host Identifier: Not Supported 00:08:20.769 Non-Operational Permissive Mode: Not Supported 00:08:20.769 NVM Sets: Not Supported 00:08:20.769 Read Recovery Levels: Not Supported 00:08:20.769 Endurance Groups: Supported 00:08:20.769 Predictable Latency Mode: Not Supported 00:08:20.769 Traffic Based Keep ALive: Not Supported 00:08:20.769 Namespace Granularity: Not Supported 00:08:20.769 SQ Associations: Not Supported 00:08:20.769 UUID List: Not Supported 00:08:20.769 Multi-Domain Subsystem: Not Supported 00:08:20.769 Fixed Capacity Management: Not Supported 00:08:20.769 Variable Capacity Management: Not Supported 00:08:20.769 Delete Endurance Group: Not Supported 00:08:20.769 Delete NVM Set: Not Supported 00:08:20.769 Extended LBA Formats Supported: Supported 00:08:20.769 Flexible Data Placement Supported: Supported 00:08:20.769 00:08:20.769 Controller Memory Buffer Support 00:08:20.769 ================================ 00:08:20.769 Supported: No 00:08:20.769 00:08:20.769 Persistent Memory Region Support 00:08:20.769 ================================ 00:08:20.769 Supported: No 00:08:20.769 00:08:20.769 Admin Command Set Attributes 00:08:20.769 ============================ 00:08:20.769 Security Send/Receive: Not Supported 00:08:20.769 Format NVM: Supported 00:08:20.769 Firmware Activate/Download: Not Supported 00:08:20.769 Namespace Management: Supported 00:08:20.769 Device Self-Test: Not Supported 00:08:20.769 Directives: Supported 00:08:20.769 NVMe-MI: Not Supported 00:08:20.769 Virtualization Management: Not Supported 00:08:20.769 Doorbell Buffer Config: Supported 00:08:20.769 Get LBA Status Capability: Not Supported 00:08:20.769 Command & Feature Lockdown Capability: Not Supported 00:08:20.769 Abort Command Limit: 4 00:08:20.769 Async Event Request Limit: 4 00:08:20.769 Number of Firmware Slots: N/A 00:08:20.769 Firmware Slot 1 Read-Only: N/A 00:08:20.769 Firmware Activation Without Reset: N/A 00:08:20.769 Multiple Update Detection Support: N/A 00:08:20.769 Firmware Update Granularity: No Information Provided 00:08:20.769 Per-Namespace SMART Log: Yes 00:08:20.769 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.769 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:20.769 Command Effects Log Page: Supported 00:08:20.769 Get Log Page Extended Data: Supported 00:08:20.769 Telemetry Log Pages: Not Supported 00:08:20.769 Persistent Event Log Pages: Not Supported 00:08:20.769 Supported Log Pages Log Page: May Support 00:08:20.769 Commands Supported & Effects Log Page: Not Supported 00:08:20.769 Feature Identifiers & Effects Log Page:May Support 00:08:20.769 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.769 Data Area 4 for Telemetry Log: Not Supported 00:08:20.769 Error Log Page Entries Supported: 1 00:08:20.769 Keep Alive: Not Supported 00:08:20.769 00:08:20.769 NVM Command Set Attributes 00:08:20.769 ========================== 00:08:20.769 Submission Queue Entry Size 00:08:20.769 Max: 64 00:08:20.769 Min: 64 00:08:20.769 Completion Queue Entry Size 00:08:20.769 Max: 16 00:08:20.769 Min: 16 00:08:20.769 Number of Namespaces: 256 00:08:20.769 Compare Command: Supported 00:08:20.769 Write 
Uncorrectable Command: Not Supported 00:08:20.769 Dataset Management Command: Supported 00:08:20.769 Write Zeroes Command: Supported 00:08:20.769 Set Features Save Field: Supported 00:08:20.769 Reservations: Not Supported 00:08:20.769 Timestamp: Supported 00:08:20.769 Copy: Supported 00:08:20.769 Volatile Write Cache: Present 00:08:20.769 Atomic Write Unit (Normal): 1 00:08:20.769 Atomic Write Unit (PFail): 1 00:08:20.769 Atomic Compare & Write Unit: 1 00:08:20.769 Fused Compare & Write: Not Supported 00:08:20.769 Scatter-Gather List 00:08:20.769 SGL Command Set: Supported 00:08:20.769 SGL Keyed: Not Supported 00:08:20.769 SGL Bit Bucket Descriptor: Not Supported 00:08:20.769 SGL Metadata Pointer: Not Supported 00:08:20.769 Oversized SGL: Not Supported 00:08:20.769 SGL Metadata Address: Not Supported 00:08:20.770 SGL Offset: Not Supported 00:08:20.770 Transport SGL Data Block: Not Supported 00:08:20.770 Replay Protected Memory Block: Not Supported 00:08:20.770 00:08:20.770 Firmware Slot Information 00:08:20.770 ========================= 00:08:20.770 Active slot: 1 00:08:20.770 Slot 1 Firmware Revision: 1.0 00:08:20.770 00:08:20.770 00:08:20.770 Commands Supported and Effects 00:08:20.770 ============================== 00:08:20.770 Admin Commands 00:08:20.770 -------------- 00:08:20.770 Delete I/O Submission Queue (00h): Supported 00:08:20.770 Create I/O Submission Queue (01h): Supported 00:08:20.770 Get Log Page (02h): Supported 00:08:20.770 Delete I/O Completion Queue (04h): Supported 00:08:20.770 Create I/O Completion Queue (05h): Supported 00:08:20.770 Identify (06h): Supported 00:08:20.770 Abort (08h): Supported 00:08:20.770 Set Features (09h): Supported 00:08:20.770 Get Features (0Ah): Supported 00:08:20.770 Asynchronous Event Request (0Ch): Supported 00:08:20.770 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.770 Directive Send (19h): Supported 00:08:20.770 Directive Receive (1Ah): Supported 00:08:20.770 Virtualization Management (1Ch): Supported 00:08:20.770 Doorbell Buffer Config (7Ch): Supported 00:08:20.770 Format NVM (80h): Supported LBA-Change 00:08:20.770 I/O Commands 00:08:20.770 ------------ 00:08:20.770 Flush (00h): Supported LBA-Change 00:08:20.770 Write (01h): Supported LBA-Change 00:08:20.770 Read (02h): Supported 00:08:20.770 Compare (05h): Supported 00:08:20.770 Write Zeroes (08h): Supported LBA-Change 00:08:20.770 Dataset Management (09h): Supported LBA-Change 00:08:20.770 Unknown (0Ch): Supported 00:08:20.770 Unknown (12h): Supported 00:08:20.770 Copy (19h): Supported LBA-Change 00:08:20.770 Unknown (1Dh): Supported LBA-Change 00:08:20.770 00:08:20.770 Error Log 00:08:20.770 ========= 00:08:20.770 00:08:20.770 Arbitration 00:08:20.770 =========== 00:08:20.770 Arbitration Burst: no limit 00:08:20.770 00:08:20.770 Power Management 00:08:20.770 ================ 00:08:20.770 Number of Power States: 1 00:08:20.770 Current Power State: Power State #0 00:08:20.770 Power State #0: 00:08:20.770 Max Power: 25.00 W 00:08:20.770 Non-Operational State: Operational 00:08:20.770 Entry Latency: 16 microseconds 00:08:20.770 Exit Latency: 4 microseconds 00:08:20.770 Relative Read Throughput: 0 00:08:20.770 Relative Read Latency: 0 00:08:20.770 Relative Write Throughput: 0 00:08:20.770 Relative Write Latency: 0 00:08:20.770 Idle Power: Not Reported 00:08:20.770 Active Power: Not Reported 00:08:20.770 Non-Operational Permissive Mode: Not Supported 00:08:20.770 00:08:20.770 Health Information 00:08:20.770 ================== 00:08:20.770 Critical Warnings: 00:08:20.770 
Available Spare Space: OK 00:08:20.770 Temperature: OK 00:08:20.770 Device Reliability: OK 00:08:20.770 Read Only: No 00:08:20.770 Volatile Memory Backup: OK 00:08:20.770 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.770 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.770 Available Spare: 0% 00:08:20.770 Available Spare Threshold: 0% 00:08:20.770 Life Percentage Used: 0% 00:08:20.770 Data Units Read: 898 00:08:20.770 Data Units Written: 827 00:08:20.770 Host Read Commands: 40876 00:08:20.770 Host Write Commands: 40299 00:08:20.770 Controller Busy Time: 0 minutes 00:08:20.770 Power Cycles: 0 00:08:20.770 Power On Hours: 0 hours 00:08:20.770 Unsafe Shutdowns: 0 00:08:20.770 Unrecoverable Media Errors: 0 00:08:20.770 Lifetime Error Log Entries: 0 00:08:20.770 Warning Temperature Time: 0 minutes 00:08:20.770 Critical Temperature Time: 0 minutes 00:08:20.770 00:08:20.770 Number of Queues 00:08:20.770 ================ 00:08:20.770 Number of I/O Submission Queues: 64 00:08:20.770 Number of I/O Completion Queues: 64 00:08:20.770 00:08:20.770 ZNS Specific Controller Data 00:08:20.770 ============================ 00:08:20.770 Zone Append Size Limit: 0 00:08:20.770 00:08:20.770 00:08:20.770 Active Namespaces 00:08:20.770 ================= 00:08:20.770 Namespace ID:1 00:08:20.770 Error Recovery Timeout: Unlimited 00:08:20.770 Command Set Identifier: NVM (00h) 00:08:20.770 Deallocate: Supported 00:08:20.770 Deallocated/Unwritten Error: Supported 00:08:20.770 Deallocated Read Value: All 0x00 00:08:20.770 Deallocate in Write Zeroes: Not Supported 00:08:20.770 Deallocated Guard Field: 0xFFFF 00:08:20.770 Flush: Supported 00:08:20.770 Reservation: Not Supported 00:08:20.770 Namespace Sharing Capabilities: Multiple Controllers 00:08:20.770 Size (in LBAs): 262144 (1GiB) 00:08:20.770 Capacity (in LBAs): 262144 (1GiB) 00:08:20.770 Utilization (in LBAs): 262144 (1GiB) 00:08:20.770 Thin Provisioning: Not Supported 00:08:20.770 Per-NS Atomic Units: No 00:08:20.770 Maximum Single Source Range Length: 128 00:08:20.770 Maximum Copy Length: 128 00:08:20.770 Maximum Source Range Count: 128 00:08:20.770 NGUID/EUI64 Never Reused: No 00:08:20.770 Namespace Write Protected: No 00:08:20.770 Endurance group ID: 1 00:08:20.770 Number of LBA Formats: 8 00:08:20.770 Current LBA Format: LBA Format #04 00:08:20.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.770 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.770 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.770 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.770 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.770 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.770 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.770 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.770 00:08:20.770 Get Feature FDP: 00:08:20.770 ================ 00:08:20.770 Enabled: Yes 00:08:20.770 FDP configuration index: 0 00:08:20.770 00:08:20.770 FDP configurations log page 00:08:20.770 =========================== 00:08:20.770 Number of FDP configurations: 1 00:08:20.770 Version: 0 00:08:20.770 Size: 112 00:08:20.770 FDP Configuration Descriptor: 0 00:08:20.770 Descriptor Size: 96 00:08:20.770 Reclaim Group Identifier format: 2 00:08:20.770 FDP Volatile Write Cache: Not Present 00:08:20.770 FDP Configuration: Valid 00:08:20.770 Vendor Specific Size: 0 00:08:20.770 Number of Reclaim Groups: 2 00:08:20.770 Number of Reclaim Unit Handles: 8 00:08:20.770 Max Placement Identifiers: 128 00:08:20.770 Number of
Namespaces Supported: 256 00:08:20.770 Reclaim Unit Nominal Size: 6000000 bytes 00:08:20.770 Estimated Reclaim Unit Time Limit: Not Reported 00:08:20.770 RUH Desc #000: RUH Type: Initially Isolated 00:08:20.770 RUH Desc #001: RUH Type: Initially Isolated 00:08:20.770 RUH Desc #002: RUH Type: Initially Isolated 00:08:20.770 RUH Desc #003: RUH Type: Initially Isolated 00:08:20.770 RUH Desc #004: RUH Type: Initially Isolated 00:08:20.770 RUH Desc #005: RUH Type: Initially Isolated 00:08:20.770 RUH Desc #006: RUH Type: Initially Isolated 00:08:20.770 RUH Desc #007: RUH Type: Initially Isolated 00:08:20.770 00:08:20.770 FDP reclaim unit handle usage log page 00:08:20.770 ====================================== 00:08:20.770 Number of Reclaim Unit Handles: 8 00:08:20.770 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:20.770 RUH Usage Desc #001: RUH Attributes: Unused 00:08:20.770 RUH Usage Desc #002: RUH Attributes: Unused 00:08:20.770 RUH Usage Desc #003: RUH Attributes: Unused 00:08:20.770 RUH Usage Desc #004: RUH Attributes: Unused 00:08:20.770 RUH Usage Desc #005: RUH Attributes: Unused 00:08:20.770 RUH Usage Desc #006: RUH Attributes: Unused 00:08:20.770 RUH Usage Desc #007: RUH Attributes: Unused 00:08:20.770 00:08:20.770 FDP statistics log page 00:08:20.770 ======================= 00:08:20.770 Host bytes with metadata written: 531931136 00:08:20.770 [2024-12-16 22:04:26.917034] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76413 terminated unexpected 00:08:20.770 Media bytes with metadata written: 532021248 00:08:20.770 Media bytes erased: 0 00:08:20.770 00:08:20.770 FDP events log page 00:08:20.770 =================== 00:08:20.770 Number of FDP events: 0 00:08:20.770 00:08:20.770 NVM Specific Namespace Data 00:08:20.770 =========================== 00:08:20.770 Logical Block Storage Tag Mask: 0 00:08:20.770 Protection Information Capabilities: 00:08:20.770 16b Guard Protection Information Storage Tag Support: No 00:08:20.770 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.771 Storage Tag Check Read Support: No 00:08:20.771 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.771 ===================================================== 00:08:20.771 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:20.771 ===================================================== 00:08:20.771 Controller Capabilities/Features 00:08:20.771 ================================ 00:08:20.771 Vendor ID: 1b36 00:08:20.771 Subsystem Vendor ID: 1af4 00:08:20.771 Serial Number: 12342 00:08:20.771 Model Number: QEMU NVMe Ctrl 00:08:20.771 Firmware Version: 8.0.0 00:08:20.771 Recommended Arb Burst: 6 00:08:20.771 IEEE OUI Identifier: 00 54 52 00:08:20.771 Multi-path I/O 
00:08:20.771 May have multiple subsystem ports: No 00:08:20.771 May have multiple controllers: No 00:08:20.771 Associated with SR-IOV VF: No 00:08:20.771 Max Data Transfer Size: 524288 00:08:20.771 Max Number of Namespaces: 256 00:08:20.771 Max Number of I/O Queues: 64 00:08:20.771 NVMe Specification Version (VS): 1.4 00:08:20.771 NVMe Specification Version (Identify): 1.4 00:08:20.771 Maximum Queue Entries: 2048 00:08:20.771 Contiguous Queues Required: Yes 00:08:20.771 Arbitration Mechanisms Supported 00:08:20.771 Weighted Round Robin: Not Supported 00:08:20.771 Vendor Specific: Not Supported 00:08:20.771 Reset Timeout: 7500 ms 00:08:20.771 Doorbell Stride: 4 bytes 00:08:20.771 NVM Subsystem Reset: Not Supported 00:08:20.771 Command Sets Supported 00:08:20.771 NVM Command Set: Supported 00:08:20.771 Boot Partition: Not Supported 00:08:20.771 Memory Page Size Minimum: 4096 bytes 00:08:20.771 Memory Page Size Maximum: 65536 bytes 00:08:20.771 Persistent Memory Region: Not Supported 00:08:20.771 Optional Asynchronous Events Supported 00:08:20.771 Namespace Attribute Notices: Supported 00:08:20.771 Firmware Activation Notices: Not Supported 00:08:20.771 ANA Change Notices: Not Supported 00:08:20.771 PLE Aggregate Log Change Notices: Not Supported 00:08:20.771 LBA Status Info Alert Notices: Not Supported 00:08:20.771 EGE Aggregate Log Change Notices: Not Supported 00:08:20.771 Normal NVM Subsystem Shutdown event: Not Supported 00:08:20.771 Zone Descriptor Change Notices: Not Supported 00:08:20.771 Discovery Log Change Notices: Not Supported 00:08:20.771 Controller Attributes 00:08:20.771 128-bit Host Identifier: Not Supported 00:08:20.771 Non-Operational Permissive Mode: Not Supported 00:08:20.771 NVM Sets: Not Supported 00:08:20.771 Read Recovery Levels: Not Supported 00:08:20.771 Endurance Groups: Not Supported 00:08:20.771 Predictable Latency Mode: Not Supported 00:08:20.771 Traffic Based Keep Alive: Not Supported 00:08:20.771 Namespace Granularity: Not Supported 00:08:20.771 SQ Associations: Not Supported 00:08:20.771 UUID List: Not Supported 00:08:20.771 Multi-Domain Subsystem: Not Supported 00:08:20.771 Fixed Capacity Management: Not Supported 00:08:20.771 Variable Capacity Management: Not Supported 00:08:20.771 Delete Endurance Group: Not Supported 00:08:20.771 Delete NVM Set: Not Supported 00:08:20.771 Extended LBA Formats Supported: Supported 00:08:20.771 Flexible Data Placement Supported: Not Supported 00:08:20.771 00:08:20.771 Controller Memory Buffer Support 00:08:20.771 ================================ 00:08:20.771 Supported: No 00:08:20.771 00:08:20.771 Persistent Memory Region Support 00:08:20.771 ================================ 00:08:20.771 Supported: No 00:08:20.771 00:08:20.771 Admin Command Set Attributes 00:08:20.771 ============================ 00:08:20.771 Security Send/Receive: Not Supported 00:08:20.771 Format NVM: Supported 00:08:20.771 Firmware Activate/Download: Not Supported 00:08:20.771 Namespace Management: Supported 00:08:20.771 Device Self-Test: Not Supported 00:08:20.771 Directives: Supported 00:08:20.771 NVMe-MI: Not Supported 00:08:20.771 Virtualization Management: Not Supported 00:08:20.771 Doorbell Buffer Config: Supported 00:08:20.771 Get LBA Status Capability: Not Supported 00:08:20.771 Command & Feature Lockdown Capability: Not Supported 00:08:20.771 Abort Command Limit: 4 00:08:20.771 Async Event Request Limit: 4 00:08:20.771 Number of Firmware Slots: N/A 00:08:20.771 Firmware Slot 1 Read-Only: N/A 00:08:20.771 Firmware Activation Without Reset: N/A 
00:08:20.771 Multiple Update Detection Support: N/A 00:08:20.771 Firmware Update Granularity: No Information Provided 00:08:20.771 Per-Namespace SMART Log: Yes 00:08:20.771 Asymmetric Namespace Access Log Page: Not Supported 00:08:20.771 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:20.771 Command Effects Log Page: Supported 00:08:20.771 Get Log Page Extended Data: Supported 00:08:20.771 Telemetry Log Pages: Not Supported 00:08:20.771 Persistent Event Log Pages: Not Supported 00:08:20.771 Supported Log Pages Log Page: May Support 00:08:20.771 Commands Supported & Effects Log Page: Not Supported 00:08:20.771 Feature Identifiers & Effects Log Page: May Support 00:08:20.771 NVMe-MI Commands & Effects Log Page: May Support 00:08:20.771 Data Area 4 for Telemetry Log: Not Supported 00:08:20.771 Error Log Page Entries Supported: 1 00:08:20.771 Keep Alive: Not Supported 00:08:20.771 00:08:20.771 NVM Command Set Attributes 00:08:20.771 ========================== 00:08:20.771 Submission Queue Entry Size 00:08:20.771 Max: 64 00:08:20.771 Min: 64 00:08:20.771 Completion Queue Entry Size 00:08:20.771 Max: 16 00:08:20.771 Min: 16 00:08:20.771 Number of Namespaces: 256 00:08:20.771 Compare Command: Supported 00:08:20.771 Write Uncorrectable Command: Not Supported 00:08:20.771 Dataset Management Command: Supported 00:08:20.771 Write Zeroes Command: Supported 00:08:20.771 Set Features Save Field: Supported 00:08:20.771 Reservations: Not Supported 00:08:20.771 Timestamp: Supported 00:08:20.771 Copy: Supported 00:08:20.771 Volatile Write Cache: Present 00:08:20.771 Atomic Write Unit (Normal): 1 00:08:20.771 Atomic Write Unit (PFail): 1 00:08:20.771 Atomic Compare & Write Unit: 1 00:08:20.771 Fused Compare & Write: Not Supported 00:08:20.771 Scatter-Gather List 00:08:20.771 SGL Command Set: Supported 00:08:20.771 SGL Keyed: Not Supported 00:08:20.771 SGL Bit Bucket Descriptor: Not Supported 00:08:20.771 SGL Metadata Pointer: Not Supported 00:08:20.771 Oversized SGL: Not Supported 00:08:20.771 SGL Metadata Address: Not Supported 00:08:20.771 SGL Offset: Not Supported 00:08:20.771 Transport SGL Data Block: Not Supported 00:08:20.771 Replay Protected Memory Block: Not Supported 00:08:20.771 00:08:20.771 Firmware Slot Information 00:08:20.771 ========================= 00:08:20.771 Active slot: 1 00:08:20.771 Slot 1 Firmware Revision: 1.0 00:08:20.771 00:08:20.771 00:08:20.772 Commands Supported and Effects 00:08:20.772 ============================== 00:08:20.772 Admin Commands 00:08:20.772 -------------- 00:08:20.772 Delete I/O Submission Queue (00h): Supported 00:08:20.772 Create I/O Submission Queue (01h): Supported 00:08:20.772 Get Log Page (02h): Supported 00:08:20.772 Delete I/O Completion Queue (04h): Supported 00:08:20.772 Create I/O Completion Queue (05h): Supported 00:08:20.772 Identify (06h): Supported 00:08:20.772 Abort (08h): Supported 00:08:20.772 Set Features (09h): Supported 00:08:20.772 Get Features (0Ah): Supported 00:08:20.772 Asynchronous Event Request (0Ch): Supported 00:08:20.772 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:20.772 Directive Send (19h): Supported 00:08:20.772 Directive Receive (1Ah): Supported 00:08:20.772 Virtualization Management (1Ch): Supported 00:08:20.772 Doorbell Buffer Config (7Ch): Supported 00:08:20.772 Format NVM (80h): Supported LBA-Change 00:08:20.772 I/O Commands 00:08:20.772 ------------ 00:08:20.772 Flush (00h): Supported LBA-Change 00:08:20.772 Write (01h): Supported LBA-Change 00:08:20.772 Read (02h): Supported 00:08:20.772 Compare (05h): 
Supported 00:08:20.772 Write Zeroes (08h): Supported LBA-Change 00:08:20.772 Dataset Management (09h): Supported LBA-Change 00:08:20.772 Unknown (0Ch): Supported 00:08:20.772 Unknown (12h): Supported 00:08:20.772 Copy (19h): Supported LBA-Change 00:08:20.772 Unknown (1Dh): Supported LBA-Change 00:08:20.772 00:08:20.772 Error Log 00:08:20.772 ========= 00:08:20.772 00:08:20.772 Arbitration 00:08:20.772 =========== 00:08:20.772 Arbitration Burst: no limit 00:08:20.772 00:08:20.772 Power Management 00:08:20.772 ================ 00:08:20.772 Number of Power States: 1 00:08:20.772 Current Power State: Power State #0 00:08:20.772 Power State #0: 00:08:20.772 Max Power: 25.00 W 00:08:20.772 Non-Operational State: Operational 00:08:20.772 Entry Latency: 16 microseconds 00:08:20.772 Exit Latency: 4 microseconds 00:08:20.772 Relative Read Throughput: 0 00:08:20.772 Relative Read Latency: 0 00:08:20.772 Relative Write Throughput: 0 00:08:20.772 Relative Write Latency: 0 00:08:20.772 Idle Power: Not Reported 00:08:20.772 Active Power: Not Reported 00:08:20.772 Non-Operational Permissive Mode: Not Supported 00:08:20.772 00:08:20.772 Health Information 00:08:20.772 ================== 00:08:20.772 Critical Warnings: 00:08:20.772 Available Spare Space: OK 00:08:20.772 Temperature: OK 00:08:20.772 Device Reliability: OK 00:08:20.772 Read Only: No 00:08:20.772 Volatile Memory Backup: OK 00:08:20.772 Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.772 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:20.772 Available Spare: 0% 00:08:20.772 Available Spare Threshold: 0% 00:08:20.772 Life Percentage Used: 0% 00:08:20.772 Data Units Read: 2314 00:08:20.772 Data Units Written: 2101 00:08:20.772 Host Read Commands: 119478 00:08:20.772 Host Write Commands: 117747 00:08:20.772 Controller Busy Time: 0 minutes 00:08:20.772 Power Cycles: 0 00:08:20.772 Power On Hours: 0 hours 00:08:20.772 Unsafe Shutdowns: 0 00:08:20.772 Unrecoverable Media Errors: 0 00:08:20.772 Lifetime Error Log Entries: 0 00:08:20.772 Warning Temperature Time: 0 minutes 00:08:20.772 Critical Temperature Time: 0 minutes 00:08:20.772 00:08:20.772 Number of Queues 00:08:20.772 ================ 00:08:20.772 Number of I/O Submission Queues: 64 00:08:20.772 Number of I/O Completion Queues: 64 00:08:20.772 00:08:20.772 ZNS Specific Controller Data 00:08:20.772 ============================ 00:08:20.772 Zone Append Size Limit: 0 00:08:20.772 00:08:20.772 00:08:20.772 Active Namespaces 00:08:20.772 ================= 00:08:20.772 Namespace ID:1 00:08:20.772 Error Recovery Timeout: Unlimited 00:08:20.772 Command Set Identifier: NVM (00h) 00:08:20.772 Deallocate: Supported 00:08:20.772 Deallocated/Unwritten Error: Supported 00:08:20.772 Deallocated Read Value: All 0x00 00:08:20.772 Deallocate in Write Zeroes: Not Supported 00:08:20.772 Deallocated Guard Field: 0xFFFF 00:08:20.772 Flush: Supported 00:08:20.772 Reservation: Not Supported 00:08:20.772 Namespace Sharing Capabilities: Private 00:08:20.772 Size (in LBAs): 1048576 (4GiB) 00:08:20.772 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.772 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.772 Thin Provisioning: Not Supported 00:08:20.772 Per-NS Atomic Units: No 00:08:20.772 Maximum Single Source Range Length: 128 00:08:20.772 Maximum Copy Length: 128 00:08:20.772 Maximum Source Range Count: 128 00:08:20.772 NGUID/EUI64 Never Reused: No 00:08:20.772 Namespace Write Protected: No 00:08:20.772 Number of LBA Formats: 8 00:08:20.772 Current LBA Format: LBA Format #04 00:08:20.772 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:20.772 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.772 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.772 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.772 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.772 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.772 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.772 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.772 00:08:20.772 NVM Specific Namespace Data 00:08:20.772 =========================== 00:08:20.772 Logical Block Storage Tag Mask: 0 00:08:20.772 Protection Information Capabilities: 00:08:20.772 16b Guard Protection Information Storage Tag Support: No 00:08:20.772 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.772 Storage Tag Check Read Support: No 00:08:20.772 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Namespace ID:2 00:08:20.772 Error Recovery Timeout: Unlimited 00:08:20.772 Command Set Identifier: NVM (00h) 00:08:20.772 Deallocate: Supported 00:08:20.772 Deallocated/Unwritten Error: Supported 00:08:20.772 Deallocated Read Value: All 0x00 00:08:20.772 Deallocate in Write Zeroes: Not Supported 00:08:20.772 Deallocated Guard Field: 0xFFFF 00:08:20.772 Flush: Supported 00:08:20.772 Reservation: Not Supported 00:08:20.772 Namespace Sharing Capabilities: Private 00:08:20.772 Size (in LBAs): 1048576 (4GiB) 00:08:20.772 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.772 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.772 Thin Provisioning: Not Supported 00:08:20.772 Per-NS Atomic Units: No 00:08:20.772 Maximum Single Source Range Length: 128 00:08:20.772 Maximum Copy Length: 128 00:08:20.772 Maximum Source Range Count: 128 00:08:20.772 NGUID/EUI64 Never Reused: No 00:08:20.772 Namespace Write Protected: No 00:08:20.772 Number of LBA Formats: 8 00:08:20.772 Current LBA Format: LBA Format #04 00:08:20.772 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.772 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.772 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.772 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.772 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.772 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.772 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.772 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.772 00:08:20.772 NVM Specific Namespace Data 00:08:20.772 =========================== 00:08:20.772 Logical Block Storage Tag Mask: 0 00:08:20.772 Protection Information Capabilities: 00:08:20.772 16b Guard Protection Information Storage Tag Support: No 00:08:20.772 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:20.772 Storage Tag Check Read Support: No 00:08:20.772 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.772 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Namespace ID:3 00:08:20.773 Error Recovery Timeout: Unlimited 00:08:20.773 Command Set Identifier: NVM (00h) 00:08:20.773 Deallocate: Supported 00:08:20.773 Deallocated/Unwritten Error: Supported 00:08:20.773 Deallocated Read Value: All 0x00 00:08:20.773 Deallocate in Write Zeroes: Not Supported 00:08:20.773 Deallocated Guard Field: 0xFFFF 00:08:20.773 Flush: Supported 00:08:20.773 Reservation: Not Supported 00:08:20.773 Namespace Sharing Capabilities: Private 00:08:20.773 Size (in LBAs): 1048576 (4GiB) 00:08:20.773 Capacity (in LBAs): 1048576 (4GiB) 00:08:20.773 Utilization (in LBAs): 1048576 (4GiB) 00:08:20.773 Thin Provisioning: Not Supported 00:08:20.773 Per-NS Atomic Units: No 00:08:20.773 Maximum Single Source Range Length: 128 00:08:20.773 Maximum Copy Length: 128 00:08:20.773 Maximum Source Range Count: 128 00:08:20.773 NGUID/EUI64 Never Reused: No 00:08:20.773 Namespace Write Protected: No 00:08:20.773 Number of LBA Formats: 8 00:08:20.773 Current LBA Format: LBA Format #04 00:08:20.773 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:20.773 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:20.773 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:20.773 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:20.773 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:20.773 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:20.773 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:20.773 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:20.773 00:08:20.773 NVM Specific Namespace Data 00:08:20.773 =========================== 00:08:20.773 Logical Block Storage Tag Mask: 0 00:08:20.773 Protection Information Capabilities: 00:08:20.773 16b Guard Protection Information Storage Tag Support: No 00:08:20.773 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:20.773 Storage Tag Check Read Support: No 00:08:20.773 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:20.773 22:04:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:20.773 22:04:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:21.035 ===================================================== 00:08:21.035 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.035 ===================================================== 00:08:21.035 Controller Capabilities/Features 00:08:21.035 ================================ 00:08:21.035 Vendor ID: 1b36 00:08:21.035 Subsystem Vendor ID: 1af4 00:08:21.035 Serial Number: 12340 00:08:21.035 Model Number: QEMU NVMe Ctrl 00:08:21.035 Firmware Version: 8.0.0 00:08:21.035 Recommended Arb Burst: 6 00:08:21.035 IEEE OUI Identifier: 00 54 52 00:08:21.035 Multi-path I/O 00:08:21.035 May have multiple subsystem ports: No 00:08:21.035 May have multiple controllers: No 00:08:21.035 Associated with SR-IOV VF: No 00:08:21.035 Max Data Transfer Size: 524288 00:08:21.035 Max Number of Namespaces: 256 00:08:21.035 Max Number of I/O Queues: 64 00:08:21.035 NVMe Specification Version (VS): 1.4 00:08:21.035 NVMe Specification Version (Identify): 1.4 00:08:21.035 Maximum Queue Entries: 2048 00:08:21.035 Contiguous Queues Required: Yes 00:08:21.035 Arbitration Mechanisms Supported 00:08:21.035 Weighted Round Robin: Not Supported 00:08:21.035 Vendor Specific: Not Supported 00:08:21.035 Reset Timeout: 7500 ms 00:08:21.035 Doorbell Stride: 4 bytes 00:08:21.035 NVM Subsystem Reset: Not Supported 00:08:21.035 Command Sets Supported 00:08:21.035 NVM Command Set: Supported 00:08:21.035 Boot Partition: Not Supported 00:08:21.035 Memory Page Size Minimum: 4096 bytes 00:08:21.035 Memory Page Size Maximum: 65536 bytes 00:08:21.035 Persistent Memory Region: Not Supported 00:08:21.035 Optional Asynchronous Events Supported 00:08:21.035 Namespace Attribute Notices: Supported 00:08:21.035 Firmware Activation Notices: Not Supported 00:08:21.035 ANA Change Notices: Not Supported 00:08:21.035 PLE Aggregate Log Change Notices: Not Supported 00:08:21.035 LBA Status Info Alert Notices: Not Supported 00:08:21.035 EGE Aggregate Log Change Notices: Not Supported 00:08:21.035 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.035 Zone Descriptor Change Notices: Not Supported 00:08:21.035 Discovery Log Change Notices: Not Supported 00:08:21.035 Controller Attributes 00:08:21.035 128-bit Host Identifier: Not Supported 00:08:21.035 Non-Operational Permissive Mode: Not Supported 00:08:21.035 NVM Sets: Not Supported 00:08:21.035 Read Recovery Levels: Not Supported 00:08:21.035 Endurance Groups: Not Supported 00:08:21.035 Predictable Latency Mode: Not Supported 00:08:21.035 Traffic Based Keep Alive: Not Supported 00:08:21.035 Namespace Granularity: Not Supported 00:08:21.035 SQ Associations: Not Supported 00:08:21.035 UUID List: Not Supported 00:08:21.035 Multi-Domain Subsystem: Not Supported 00:08:21.035 Fixed Capacity Management: Not Supported 00:08:21.035 Variable Capacity Management: Not Supported 00:08:21.035 Delete Endurance Group: Not Supported 00:08:21.035 Delete NVM Set: Not Supported 00:08:21.035 Extended LBA Formats Supported: Supported 00:08:21.035 Flexible Data Placement Supported: Not Supported 00:08:21.035 00:08:21.035 Controller Memory Buffer Support 00:08:21.035 ================================ 00:08:21.035 Supported: No 00:08:21.035 00:08:21.035 Persistent Memory Region Support 00:08:21.035 
================================ 00:08:21.035 Supported: No 00:08:21.035 00:08:21.035 Admin Command Set Attributes 00:08:21.035 ============================ 00:08:21.035 Security Send/Receive: Not Supported 00:08:21.035 Format NVM: Supported 00:08:21.035 Firmware Activate/Download: Not Supported 00:08:21.035 Namespace Management: Supported 00:08:21.035 Device Self-Test: Not Supported 00:08:21.035 Directives: Supported 00:08:21.035 NVMe-MI: Not Supported 00:08:21.035 Virtualization Management: Not Supported 00:08:21.035 Doorbell Buffer Config: Supported 00:08:21.035 Get LBA Status Capability: Not Supported 00:08:21.035 Command & Feature Lockdown Capability: Not Supported 00:08:21.035 Abort Command Limit: 4 00:08:21.035 Async Event Request Limit: 4 00:08:21.035 Number of Firmware Slots: N/A 00:08:21.035 Firmware Slot 1 Read-Only: N/A 00:08:21.035 Firmware Activation Without Reset: N/A 00:08:21.035 Multiple Update Detection Support: N/A 00:08:21.035 Firmware Update Granularity: No Information Provided 00:08:21.035 Per-Namespace SMART Log: Yes 00:08:21.035 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.035 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:21.035 Command Effects Log Page: Supported 00:08:21.035 Get Log Page Extended Data: Supported 00:08:21.035 Telemetry Log Pages: Not Supported 00:08:21.035 Persistent Event Log Pages: Not Supported 00:08:21.035 Supported Log Pages Log Page: May Support 00:08:21.035 Commands Supported & Effects Log Page: Not Supported 00:08:21.036 Feature Identifiers & Effects Log Page: May Support 00:08:21.036 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.036 Data Area 4 for Telemetry Log: Not Supported 00:08:21.036 Error Log Page Entries Supported: 1 00:08:21.036 Keep Alive: Not Supported 00:08:21.036 00:08:21.036 NVM Command Set Attributes 00:08:21.036 ========================== 00:08:21.036 Submission Queue Entry Size 00:08:21.036 Max: 64 00:08:21.036 Min: 64 00:08:21.036 Completion Queue Entry Size 00:08:21.036 Max: 16 00:08:21.036 Min: 16 00:08:21.036 Number of Namespaces: 256 00:08:21.036 Compare Command: Supported 00:08:21.036 Write Uncorrectable Command: Not Supported 00:08:21.036 Dataset Management Command: Supported 00:08:21.036 Write Zeroes Command: Supported 00:08:21.036 Set Features Save Field: Supported 00:08:21.036 Reservations: Not Supported 00:08:21.036 Timestamp: Supported 00:08:21.036 Copy: Supported 00:08:21.036 Volatile Write Cache: Present 00:08:21.036 Atomic Write Unit (Normal): 1 00:08:21.036 Atomic Write Unit (PFail): 1 00:08:21.036 Atomic Compare & Write Unit: 1 00:08:21.036 Fused Compare & Write: Not Supported 00:08:21.036 Scatter-Gather List 00:08:21.036 SGL Command Set: Supported 00:08:21.036 SGL Keyed: Not Supported 00:08:21.036 SGL Bit Bucket Descriptor: Not Supported 00:08:21.036 SGL Metadata Pointer: Not Supported 00:08:21.036 Oversized SGL: Not Supported 00:08:21.036 SGL Metadata Address: Not Supported 00:08:21.036 SGL Offset: Not Supported 00:08:21.036 Transport SGL Data Block: Not Supported 00:08:21.036 Replay Protected Memory Block: Not Supported 00:08:21.036 00:08:21.036 Firmware Slot Information 00:08:21.036 ========================= 00:08:21.036 Active slot: 1 00:08:21.036 Slot 1 Firmware Revision: 1.0 00:08:21.036 00:08:21.036 00:08:21.036 Commands Supported and Effects 00:08:21.036 ============================== 00:08:21.036 Admin Commands 00:08:21.036 -------------- 00:08:21.036 Delete I/O Submission Queue (00h): Supported 00:08:21.036 Create I/O Submission Queue (01h): Supported 00:08:21.036 
Get Log Page (02h): Supported 00:08:21.036 Delete I/O Completion Queue (04h): Supported 00:08:21.036 Create I/O Completion Queue (05h): Supported 00:08:21.036 Identify (06h): Supported 00:08:21.036 Abort (08h): Supported 00:08:21.036 Set Features (09h): Supported 00:08:21.036 Get Features (0Ah): Supported 00:08:21.036 Asynchronous Event Request (0Ch): Supported 00:08:21.036 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.036 Directive Send (19h): Supported 00:08:21.036 Directive Receive (1Ah): Supported 00:08:21.036 Virtualization Management (1Ch): Supported 00:08:21.036 Doorbell Buffer Config (7Ch): Supported 00:08:21.036 Format NVM (80h): Supported LBA-Change 00:08:21.036 I/O Commands 00:08:21.036 ------------ 00:08:21.036 Flush (00h): Supported LBA-Change 00:08:21.036 Write (01h): Supported LBA-Change 00:08:21.036 Read (02h): Supported 00:08:21.036 Compare (05h): Supported 00:08:21.036 Write Zeroes (08h): Supported LBA-Change 00:08:21.036 Dataset Management (09h): Supported LBA-Change 00:08:21.036 Unknown (0Ch): Supported 00:08:21.036 Unknown (12h): Supported 00:08:21.036 Copy (19h): Supported LBA-Change 00:08:21.036 Unknown (1Dh): Supported LBA-Change 00:08:21.036 00:08:21.036 Error Log 00:08:21.036 ========= 00:08:21.036 00:08:21.036 Arbitration 00:08:21.036 =========== 00:08:21.036 Arbitration Burst: no limit 00:08:21.036 00:08:21.036 Power Management 00:08:21.036 ================ 00:08:21.036 Number of Power States: 1 00:08:21.036 Current Power State: Power State #0 00:08:21.036 Power State #0: 00:08:21.036 Max Power: 25.00 W 00:08:21.036 Non-Operational State: Operational 00:08:21.036 Entry Latency: 16 microseconds 00:08:21.036 Exit Latency: 4 microseconds 00:08:21.036 Relative Read Throughput: 0 00:08:21.036 Relative Read Latency: 0 00:08:21.036 Relative Write Throughput: 0 00:08:21.036 Relative Write Latency: 0 00:08:21.036 Idle Power: Not Reported 00:08:21.036 Active Power: Not Reported 00:08:21.036 Non-Operational Permissive Mode: Not Supported 00:08:21.036 00:08:21.036 Health Information 00:08:21.036 ================== 00:08:21.036 Critical Warnings: 00:08:21.036 Available Spare Space: OK 00:08:21.036 Temperature: OK 00:08:21.036 Device Reliability: OK 00:08:21.036 Read Only: No 00:08:21.036 Volatile Memory Backup: OK 00:08:21.036 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.036 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.036 Available Spare: 0% 00:08:21.036 Available Spare Threshold: 0% 00:08:21.036 Life Percentage Used: 0% 00:08:21.036 Data Units Read: 708 00:08:21.036 Data Units Written: 636 00:08:21.036 Host Read Commands: 38852 00:08:21.036 Host Write Commands: 38638 00:08:21.036 Controller Busy Time: 0 minutes 00:08:21.036 Power Cycles: 0 00:08:21.036 Power On Hours: 0 hours 00:08:21.036 Unsafe Shutdowns: 0 00:08:21.036 Unrecoverable Media Errors: 0 00:08:21.036 Lifetime Error Log Entries: 0 00:08:21.036 Warning Temperature Time: 0 minutes 00:08:21.036 Critical Temperature Time: 0 minutes 00:08:21.036 00:08:21.036 Number of Queues 00:08:21.036 ================ 00:08:21.036 Number of I/O Submission Queues: 64 00:08:21.036 Number of I/O Completion Queues: 64 00:08:21.036 00:08:21.036 ZNS Specific Controller Data 00:08:21.036 ============================ 00:08:21.036 Zone Append Size Limit: 0 00:08:21.036 00:08:21.036 00:08:21.036 Active Namespaces 00:08:21.036 ================= 00:08:21.036 Namespace ID:1 00:08:21.036 Error Recovery Timeout: Unlimited 00:08:21.036 Command Set Identifier: NVM (00h) 00:08:21.036 Deallocate: Supported 
00:08:21.036 Deallocated/Unwritten Error: Supported 00:08:21.036 Deallocated Read Value: All 0x00 00:08:21.036 Deallocate in Write Zeroes: Not Supported 00:08:21.036 Deallocated Guard Field: 0xFFFF 00:08:21.036 Flush: Supported 00:08:21.036 Reservation: Not Supported 00:08:21.036 Metadata Transferred as: Separate Metadata Buffer 00:08:21.036 Namespace Sharing Capabilities: Private 00:08:21.036 Size (in LBAs): 1548666 (5GiB) 00:08:21.036 Capacity (in LBAs): 1548666 (5GiB) 00:08:21.036 Utilization (in LBAs): 1548666 (5GiB) 00:08:21.036 Thin Provisioning: Not Supported 00:08:21.036 Per-NS Atomic Units: No 00:08:21.036 Maximum Single Source Range Length: 128 00:08:21.036 Maximum Copy Length: 128 00:08:21.036 Maximum Source Range Count: 128 00:08:21.036 NGUID/EUI64 Never Reused: No 00:08:21.036 Namespace Write Protected: No 00:08:21.036 Number of LBA Formats: 8 00:08:21.036 Current LBA Format: LBA Format #07 00:08:21.036 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.036 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.036 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.036 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.036 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.036 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.036 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.036 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.036 00:08:21.036 NVM Specific Namespace Data 00:08:21.036 =========================== 00:08:21.036 Logical Block Storage Tag Mask: 0 00:08:21.036 Protection Information Capabilities: 00:08:21.036 16b Guard Protection Information Storage Tag Support: No 00:08:21.036 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.036 Storage Tag Check Read Support: No 00:08:21.036 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.036 22:04:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.037 22:04:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:21.037 ===================================================== 00:08:21.037 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.037 ===================================================== 00:08:21.037 Controller Capabilities/Features 00:08:21.037 ================================ 00:08:21.037 Vendor ID: 1b36 00:08:21.037 Subsystem Vendor ID: 1af4 00:08:21.037 Serial Number: 12341 00:08:21.037 Model Number: QEMU NVMe Ctrl 00:08:21.037 Firmware Version: 8.0.0 00:08:21.037 Recommended Arb Burst: 6 00:08:21.037 IEEE OUI Identifier: 00 54 52 00:08:21.037 Multi-path I/O 00:08:21.037 May have multiple subsystem ports: No 00:08:21.037 May have multiple 
controllers: No 00:08:21.037 Associated with SR-IOV VF: No 00:08:21.037 Max Data Transfer Size: 524288 00:08:21.037 Max Number of Namespaces: 256 00:08:21.037 Max Number of I/O Queues: 64 00:08:21.037 NVMe Specification Version (VS): 1.4 00:08:21.037 NVMe Specification Version (Identify): 1.4 00:08:21.037 Maximum Queue Entries: 2048 00:08:21.037 Contiguous Queues Required: Yes 00:08:21.037 Arbitration Mechanisms Supported 00:08:21.037 Weighted Round Robin: Not Supported 00:08:21.037 Vendor Specific: Not Supported 00:08:21.037 Reset Timeout: 7500 ms 00:08:21.037 Doorbell Stride: 4 bytes 00:08:21.037 NVM Subsystem Reset: Not Supported 00:08:21.037 Command Sets Supported 00:08:21.037 NVM Command Set: Supported 00:08:21.037 Boot Partition: Not Supported 00:08:21.037 Memory Page Size Minimum: 4096 bytes 00:08:21.037 Memory Page Size Maximum: 65536 bytes 00:08:21.037 Persistent Memory Region: Not Supported 00:08:21.037 Optional Asynchronous Events Supported 00:08:21.037 Namespace Attribute Notices: Supported 00:08:21.037 Firmware Activation Notices: Not Supported 00:08:21.037 ANA Change Notices: Not Supported 00:08:21.037 PLE Aggregate Log Change Notices: Not Supported 00:08:21.037 LBA Status Info Alert Notices: Not Supported 00:08:21.037 EGE Aggregate Log Change Notices: Not Supported 00:08:21.037 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.037 Zone Descriptor Change Notices: Not Supported 00:08:21.037 Discovery Log Change Notices: Not Supported 00:08:21.037 Controller Attributes 00:08:21.037 128-bit Host Identifier: Not Supported 00:08:21.037 Non-Operational Permissive Mode: Not Supported 00:08:21.037 NVM Sets: Not Supported 00:08:21.037 Read Recovery Levels: Not Supported 00:08:21.037 Endurance Groups: Not Supported 00:08:21.037 Predictable Latency Mode: Not Supported 00:08:21.037 Traffic Based Keep Alive: Not Supported 00:08:21.037 Namespace Granularity: Not Supported 00:08:21.037 SQ Associations: Not Supported 00:08:21.037 UUID List: Not Supported 00:08:21.037 Multi-Domain Subsystem: Not Supported 00:08:21.037 Fixed Capacity Management: Not Supported 00:08:21.037 Variable Capacity Management: Not Supported 00:08:21.037 Delete Endurance Group: Not Supported 00:08:21.037 Delete NVM Set: Not Supported 00:08:21.037 Extended LBA Formats Supported: Supported 00:08:21.037 Flexible Data Placement Supported: Not Supported 00:08:21.037 00:08:21.037 Controller Memory Buffer Support 00:08:21.037 ================================ 00:08:21.037 Supported: No 00:08:21.037 00:08:21.037 Persistent Memory Region Support 00:08:21.037 ================================ 00:08:21.037 Supported: No 00:08:21.037 00:08:21.037 Admin Command Set Attributes 00:08:21.037 ============================ 00:08:21.037 Security Send/Receive: Not Supported 00:08:21.037 Format NVM: Supported 00:08:21.037 Firmware Activate/Download: Not Supported 00:08:21.037 Namespace Management: Supported 00:08:21.037 Device Self-Test: Not Supported 00:08:21.037 Directives: Supported 00:08:21.037 NVMe-MI: Not Supported 00:08:21.037 Virtualization Management: Not Supported 00:08:21.037 Doorbell Buffer Config: Supported 00:08:21.037 Get LBA Status Capability: Not Supported 00:08:21.037 Command & Feature Lockdown Capability: Not Supported 00:08:21.037 Abort Command Limit: 4 00:08:21.037 Async Event Request Limit: 4 00:08:21.037 Number of Firmware Slots: N/A 00:08:21.037 Firmware Slot 1 Read-Only: N/A 00:08:21.037 Firmware Activation Without Reset: N/A 00:08:21.037 Multiple Update Detection Support: N/A 00:08:21.037 Firmware Update 
Granularity: No Information Provided 00:08:21.037 Per-Namespace SMART Log: Yes 00:08:21.037 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.037 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:21.037 Command Effects Log Page: Supported 00:08:21.037 Get Log Page Extended Data: Supported 00:08:21.037 Telemetry Log Pages: Not Supported 00:08:21.037 Persistent Event Log Pages: Not Supported 00:08:21.037 Supported Log Pages Log Page: May Support 00:08:21.037 Commands Supported & Effects Log Page: Not Supported 00:08:21.037 Feature Identifiers & Effects Log Page: May Support 00:08:21.037 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.037 Data Area 4 for Telemetry Log: Not Supported 00:08:21.037 Error Log Page Entries Supported: 1 00:08:21.037 Keep Alive: Not Supported 00:08:21.037 00:08:21.037 NVM Command Set Attributes 00:08:21.037 ========================== 00:08:21.037 Submission Queue Entry Size 00:08:21.037 Max: 64 00:08:21.037 Min: 64 00:08:21.037 Completion Queue Entry Size 00:08:21.037 Max: 16 00:08:21.037 Min: 16 00:08:21.037 Number of Namespaces: 256 00:08:21.037 Compare Command: Supported 00:08:21.037 Write Uncorrectable Command: Not Supported 00:08:21.037 Dataset Management Command: Supported 00:08:21.037 Write Zeroes Command: Supported 00:08:21.037 Set Features Save Field: Supported 00:08:21.037 Reservations: Not Supported 00:08:21.037 Timestamp: Supported 00:08:21.037 Copy: Supported 00:08:21.037 Volatile Write Cache: Present 00:08:21.037 Atomic Write Unit (Normal): 1 00:08:21.037 Atomic Write Unit (PFail): 1 00:08:21.037 Atomic Compare & Write Unit: 1 00:08:21.037 Fused Compare & Write: Not Supported 00:08:21.037 Scatter-Gather List 00:08:21.037 SGL Command Set: Supported 00:08:21.037 SGL Keyed: Not Supported 00:08:21.037 SGL Bit Bucket Descriptor: Not Supported 00:08:21.037 SGL Metadata Pointer: Not Supported 00:08:21.037 Oversized SGL: Not Supported 00:08:21.037 SGL Metadata Address: Not Supported 00:08:21.037 SGL Offset: Not Supported 00:08:21.037 Transport SGL Data Block: Not Supported 00:08:21.037 Replay Protected Memory Block: Not Supported 00:08:21.037 00:08:21.037 Firmware Slot Information 00:08:21.037 ========================= 00:08:21.037 Active slot: 1 00:08:21.037 Slot 1 Firmware Revision: 1.0 00:08:21.037 00:08:21.037 00:08:21.037 Commands Supported and Effects 00:08:21.037 ============================== 00:08:21.037 Admin Commands 00:08:21.037 -------------- 00:08:21.037 Delete I/O Submission Queue (00h): Supported 00:08:21.037 Create I/O Submission Queue (01h): Supported 00:08:21.037 Get Log Page (02h): Supported 00:08:21.037 Delete I/O Completion Queue (04h): Supported 00:08:21.037 Create I/O Completion Queue (05h): Supported 00:08:21.037 Identify (06h): Supported 00:08:21.037 Abort (08h): Supported 00:08:21.037 Set Features (09h): Supported 00:08:21.037 Get Features (0Ah): Supported 00:08:21.037 Asynchronous Event Request (0Ch): Supported 00:08:21.037 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.037 Directive Send (19h): Supported 00:08:21.037 Directive Receive (1Ah): Supported 00:08:21.037 Virtualization Management (1Ch): Supported 00:08:21.037 Doorbell Buffer Config (7Ch): Supported 00:08:21.037 Format NVM (80h): Supported LBA-Change 00:08:21.037 I/O Commands 00:08:21.037 ------------ 00:08:21.037 Flush (00h): Supported LBA-Change 00:08:21.038 Write (01h): Supported LBA-Change 00:08:21.038 Read (02h): Supported 00:08:21.038 Compare (05h): Supported 00:08:21.038 Write Zeroes (08h): Supported LBA-Change 00:08:21.038 
Dataset Management (09h): Supported LBA-Change 00:08:21.038 Unknown (0Ch): Supported 00:08:21.038 Unknown (12h): Supported 00:08:21.038 Copy (19h): Supported LBA-Change 00:08:21.038 Unknown (1Dh): Supported LBA-Change 00:08:21.038 00:08:21.038 Error Log 00:08:21.038 ========= 00:08:21.038 00:08:21.038 Arbitration 00:08:21.038 =========== 00:08:21.038 Arbitration Burst: no limit 00:08:21.038 00:08:21.038 Power Management 00:08:21.038 ================ 00:08:21.038 Number of Power States: 1 00:08:21.038 Current Power State: Power State #0 00:08:21.038 Power State #0: 00:08:21.038 Max Power: 25.00 W 00:08:21.038 Non-Operational State: Operational 00:08:21.038 Entry Latency: 16 microseconds 00:08:21.038 Exit Latency: 4 microseconds 00:08:21.038 Relative Read Throughput: 0 00:08:21.038 Relative Read Latency: 0 00:08:21.038 Relative Write Throughput: 0 00:08:21.038 Relative Write Latency: 0 00:08:21.038 Idle Power: Not Reported 00:08:21.038 Active Power: Not Reported 00:08:21.038 Non-Operational Permissive Mode: Not Supported 00:08:21.038 00:08:21.038 Health Information 00:08:21.038 ================== 00:08:21.038 Critical Warnings: 00:08:21.038 Available Spare Space: OK 00:08:21.038 Temperature: OK 00:08:21.038 Device Reliability: OK 00:08:21.038 Read Only: No 00:08:21.038 Volatile Memory Backup: OK 00:08:21.038 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.038 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.038 Available Spare: 0% 00:08:21.038 Available Spare Threshold: 0% 00:08:21.038 Life Percentage Used: 0% 00:08:21.038 Data Units Read: 1091 00:08:21.038 Data Units Written: 964 00:08:21.038 Host Read Commands: 56889 00:08:21.038 Host Write Commands: 55784 00:08:21.038 Controller Busy Time: 0 minutes 00:08:21.038 Power Cycles: 0 00:08:21.038 Power On Hours: 0 hours 00:08:21.038 Unsafe Shutdowns: 0 00:08:21.038 Unrecoverable Media Errors: 0 00:08:21.038 Lifetime Error Log Entries: 0 00:08:21.038 Warning Temperature Time: 0 minutes 00:08:21.038 Critical Temperature Time: 0 minutes 00:08:21.038 00:08:21.038 Number of Queues 00:08:21.038 ================ 00:08:21.038 Number of I/O Submission Queues: 64 00:08:21.038 Number of I/O Completion Queues: 64 00:08:21.038 00:08:21.038 ZNS Specific Controller Data 00:08:21.038 ============================ 00:08:21.038 Zone Append Size Limit: 0 00:08:21.038 00:08:21.038 00:08:21.038 Active Namespaces 00:08:21.038 ================= 00:08:21.038 Namespace ID:1 00:08:21.038 Error Recovery Timeout: Unlimited 00:08:21.038 Command Set Identifier: NVM (00h) 00:08:21.038 Deallocate: Supported 00:08:21.038 Deallocated/Unwritten Error: Supported 00:08:21.038 Deallocated Read Value: All 0x00 00:08:21.038 Deallocate in Write Zeroes: Not Supported 00:08:21.038 Deallocated Guard Field: 0xFFFF 00:08:21.038 Flush: Supported 00:08:21.038 Reservation: Not Supported 00:08:21.038 Namespace Sharing Capabilities: Private 00:08:21.038 Size (in LBAs): 1310720 (5GiB) 00:08:21.038 Capacity (in LBAs): 1310720 (5GiB) 00:08:21.038 Utilization (in LBAs): 1310720 (5GiB) 00:08:21.038 Thin Provisioning: Not Supported 00:08:21.038 Per-NS Atomic Units: No 00:08:21.038 Maximum Single Source Range Length: 128 00:08:21.038 Maximum Copy Length: 128 00:08:21.038 Maximum Source Range Count: 128 00:08:21.038 NGUID/EUI64 Never Reused: No 00:08:21.038 Namespace Write Protected: No 00:08:21.038 Number of LBA Formats: 8 00:08:21.038 Current LBA Format: LBA Format #04 00:08:21.038 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.038 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:21.038 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.038 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.038 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.038 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.038 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.038 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.038 00:08:21.038 NVM Specific Namespace Data 00:08:21.038 =========================== 00:08:21.038 Logical Block Storage Tag Mask: 0 00:08:21.038 Protection Information Capabilities: 00:08:21.038 16b Guard Protection Information Storage Tag Support: No 00:08:21.038 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.038 Storage Tag Check Read Support: No 00:08:21.038 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.038 22:04:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.038 22:04:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:21.301 ===================================================== 00:08:21.301 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.301 ===================================================== 00:08:21.301 Controller Capabilities/Features 00:08:21.301 ================================ 00:08:21.301 Vendor ID: 1b36 00:08:21.301 Subsystem Vendor ID: 1af4 00:08:21.301 Serial Number: 12342 00:08:21.301 Model Number: QEMU NVMe Ctrl 00:08:21.301 Firmware Version: 8.0.0 00:08:21.301 Recommended Arb Burst: 6 00:08:21.301 IEEE OUI Identifier: 00 54 52 00:08:21.301 Multi-path I/O 00:08:21.301 May have multiple subsystem ports: No 00:08:21.301 May have multiple controllers: No 00:08:21.301 Associated with SR-IOV VF: No 00:08:21.301 Max Data Transfer Size: 524288 00:08:21.301 Max Number of Namespaces: 256 00:08:21.301 Max Number of I/O Queues: 64 00:08:21.301 NVMe Specification Version (VS): 1.4 00:08:21.301 NVMe Specification Version (Identify): 1.4 00:08:21.301 Maximum Queue Entries: 2048 00:08:21.301 Contiguous Queues Required: Yes 00:08:21.301 Arbitration Mechanisms Supported 00:08:21.301 Weighted Round Robin: Not Supported 00:08:21.301 Vendor Specific: Not Supported 00:08:21.301 Reset Timeout: 7500 ms 00:08:21.301 Doorbell Stride: 4 bytes 00:08:21.301 NVM Subsystem Reset: Not Supported 00:08:21.301 Command Sets Supported 00:08:21.301 NVM Command Set: Supported 00:08:21.301 Boot Partition: Not Supported 00:08:21.301 Memory Page Size Minimum: 4096 bytes 00:08:21.301 Memory Page Size Maximum: 65536 bytes 00:08:21.301 Persistent Memory Region: Not Supported 00:08:21.301 Optional Asynchronous Events Supported 00:08:21.301 Namespace Attribute Notices: Supported 00:08:21.301 Firmware 
Activation Notices: Not Supported 00:08:21.301 ANA Change Notices: Not Supported 00:08:21.301 PLE Aggregate Log Change Notices: Not Supported 00:08:21.301 LBA Status Info Alert Notices: Not Supported 00:08:21.301 EGE Aggregate Log Change Notices: Not Supported 00:08:21.301 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.301 Zone Descriptor Change Notices: Not Supported 00:08:21.301 Discovery Log Change Notices: Not Supported 00:08:21.301 Controller Attributes 00:08:21.301 128-bit Host Identifier: Not Supported 00:08:21.301 Non-Operational Permissive Mode: Not Supported 00:08:21.301 NVM Sets: Not Supported 00:08:21.301 Read Recovery Levels: Not Supported 00:08:21.301 Endurance Groups: Not Supported 00:08:21.301 Predictable Latency Mode: Not Supported 00:08:21.301 Traffic Based Keep Alive: Not Supported 00:08:21.301 Namespace Granularity: Not Supported 00:08:21.301 SQ Associations: Not Supported 00:08:21.301 UUID List: Not Supported 00:08:21.301 Multi-Domain Subsystem: Not Supported 00:08:21.301 Fixed Capacity Management: Not Supported 00:08:21.301 Variable Capacity Management: Not Supported 00:08:21.301 Delete Endurance Group: Not Supported 00:08:21.301 Delete NVM Set: Not Supported 00:08:21.301 Extended LBA Formats Supported: Supported 00:08:21.301 Flexible Data Placement Supported: Not Supported 00:08:21.301 00:08:21.301 Controller Memory Buffer Support 00:08:21.301 ================================ 00:08:21.301 Supported: No 00:08:21.301 00:08:21.301 Persistent Memory Region Support 00:08:21.301 ================================ 00:08:21.301 Supported: No 00:08:21.301 00:08:21.301 Admin Command Set Attributes 00:08:21.301 ============================ 00:08:21.301 Security Send/Receive: Not Supported 00:08:21.301 Format NVM: Supported 00:08:21.301 Firmware Activate/Download: Not Supported 00:08:21.301 Namespace Management: Supported 00:08:21.301 Device Self-Test: Not Supported 00:08:21.301 Directives: Supported 00:08:21.301 NVMe-MI: Not Supported 00:08:21.301 Virtualization Management: Not Supported 00:08:21.301 Doorbell Buffer Config: Supported 00:08:21.301 Get LBA Status Capability: Not Supported 00:08:21.301 Command & Feature Lockdown Capability: Not Supported 00:08:21.301 Abort Command Limit: 4 00:08:21.301 Async Event Request Limit: 4 00:08:21.301 Number of Firmware Slots: N/A 00:08:21.301 Firmware Slot 1 Read-Only: N/A 00:08:21.301 Firmware Activation Without Reset: N/A 00:08:21.301 Multiple Update Detection Support: N/A 00:08:21.301 Firmware Update Granularity: No Information Provided 00:08:21.301 Per-Namespace SMART Log: Yes 00:08:21.301 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.301 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:21.301 Command Effects Log Page: Supported 00:08:21.301 Get Log Page Extended Data: Supported 00:08:21.301 Telemetry Log Pages: Not Supported 00:08:21.301 Persistent Event Log Pages: Not Supported 00:08:21.301 Supported Log Pages Log Page: May Support 00:08:21.301 Commands Supported & Effects Log Page: Not Supported 00:08:21.301 Feature Identifiers & Effects Log Page: May Support 00:08:21.301 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.301 Data Area 4 for Telemetry Log: Not Supported 00:08:21.301 Error Log Page Entries Supported: 1 00:08:21.301 Keep Alive: Not Supported 00:08:21.301 00:08:21.301 NVM Command Set Attributes 00:08:21.301 ========================== 00:08:21.301 Submission Queue Entry Size 00:08:21.301 Max: 64 00:08:21.301 Min: 64 00:08:21.301 Completion Queue Entry Size 00:08:21.301 Max: 16 
00:08:21.301 Min: 16 00:08:21.301 Number of Namespaces: 256 00:08:21.301 Compare Command: Supported 00:08:21.301 Write Uncorrectable Command: Not Supported 00:08:21.301 Dataset Management Command: Supported 00:08:21.301 Write Zeroes Command: Supported 00:08:21.301 Set Features Save Field: Supported 00:08:21.301 Reservations: Not Supported 00:08:21.302 Timestamp: Supported 00:08:21.302 Copy: Supported 00:08:21.302 Volatile Write Cache: Present 00:08:21.302 Atomic Write Unit (Normal): 1 00:08:21.302 Atomic Write Unit (PFail): 1 00:08:21.302 Atomic Compare & Write Unit: 1 00:08:21.302 Fused Compare & Write: Not Supported 00:08:21.302 Scatter-Gather List 00:08:21.302 SGL Command Set: Supported 00:08:21.302 SGL Keyed: Not Supported 00:08:21.302 SGL Bit Bucket Descriptor: Not Supported 00:08:21.302 SGL Metadata Pointer: Not Supported 00:08:21.302 Oversized SGL: Not Supported 00:08:21.302 SGL Metadata Address: Not Supported 00:08:21.302 SGL Offset: Not Supported 00:08:21.302 Transport SGL Data Block: Not Supported 00:08:21.302 Replay Protected Memory Block: Not Supported 00:08:21.302 00:08:21.302 Firmware Slot Information 00:08:21.302 ========================= 00:08:21.302 Active slot: 1 00:08:21.302 Slot 1 Firmware Revision: 1.0 00:08:21.302 00:08:21.302 00:08:21.302 Commands Supported and Effects 00:08:21.302 ============================== 00:08:21.302 Admin Commands 00:08:21.302 -------------- 00:08:21.302 Delete I/O Submission Queue (00h): Supported 00:08:21.302 Create I/O Submission Queue (01h): Supported 00:08:21.302 Get Log Page (02h): Supported 00:08:21.302 Delete I/O Completion Queue (04h): Supported 00:08:21.302 Create I/O Completion Queue (05h): Supported 00:08:21.302 Identify (06h): Supported 00:08:21.302 Abort (08h): Supported 00:08:21.302 Set Features (09h): Supported 00:08:21.302 Get Features (0Ah): Supported 00:08:21.302 Asynchronous Event Request (0Ch): Supported 00:08:21.302 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.302 Directive Send (19h): Supported 00:08:21.302 Directive Receive (1Ah): Supported 00:08:21.302 Virtualization Management (1Ch): Supported 00:08:21.302 Doorbell Buffer Config (7Ch): Supported 00:08:21.302 Format NVM (80h): Supported LBA-Change 00:08:21.302 I/O Commands 00:08:21.302 ------------ 00:08:21.302 Flush (00h): Supported LBA-Change 00:08:21.302 Write (01h): Supported LBA-Change 00:08:21.302 Read (02h): Supported 00:08:21.302 Compare (05h): Supported 00:08:21.302 Write Zeroes (08h): Supported LBA-Change 00:08:21.302 Dataset Management (09h): Supported LBA-Change 00:08:21.302 Unknown (0Ch): Supported 00:08:21.302 Unknown (12h): Supported 00:08:21.302 Copy (19h): Supported LBA-Change 00:08:21.302 Unknown (1Dh): Supported LBA-Change 00:08:21.302 00:08:21.302 Error Log 00:08:21.302 ========= 00:08:21.302 00:08:21.302 Arbitration 00:08:21.302 =========== 00:08:21.302 Arbitration Burst: no limit 00:08:21.302 00:08:21.302 Power Management 00:08:21.302 ================ 00:08:21.302 Number of Power States: 1 00:08:21.302 Current Power State: Power State #0 00:08:21.302 Power State #0: 00:08:21.302 Max Power: 25.00 W 00:08:21.302 Non-Operational State: Operational 00:08:21.302 Entry Latency: 16 microseconds 00:08:21.302 Exit Latency: 4 microseconds 00:08:21.302 Relative Read Throughput: 0 00:08:21.302 Relative Read Latency: 0 00:08:21.302 Relative Write Throughput: 0 00:08:21.302 Relative Write Latency: 0 00:08:21.302 Idle Power: Not Reported 00:08:21.302 Active Power: Not Reported 00:08:21.302 Non-Operational Permissive Mode: Not Supported 
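[Editor's note, not part of the captured log: the Health Information block that follows is the quickest health summary in each identify dump. A minimal bash sketch for pulling just that block out of the tool's output, assuming only the binary path, flags, and BDF already traced in this log; the awk/grep filtering is illustrative and is not something nvme/nvme.sh itself does:

    # Run identify against one controller (path/flags as traced above) and keep
    # only the Health Information section, then the headline health fields.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 |
      awk '/^Health Information/,/^Number of Queues/' |   # illustrative range filter
      grep -E 'Temperature|Available Spare|Life Percentage Used'
]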
00:08:21.302 00:08:21.302 Health Information 00:08:21.302 ================== 00:08:21.302 Critical Warnings: 00:08:21.302 Available Spare Space: OK 00:08:21.302 Temperature: OK 00:08:21.302 Device Reliability: OK 00:08:21.302 Read Only: No 00:08:21.302 Volatile Memory Backup: OK 00:08:21.302 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.302 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.302 Available Spare: 0% 00:08:21.302 Available Spare Threshold: 0% 00:08:21.302 Life Percentage Used: 0% 00:08:21.302 Data Units Read: 2314 00:08:21.302 Data Units Written: 2101 00:08:21.302 Host Read Commands: 119478 00:08:21.302 Host Write Commands: 117747 00:08:21.302 Controller Busy Time: 0 minutes 00:08:21.302 Power Cycles: 0 00:08:21.302 Power On Hours: 0 hours 00:08:21.302 Unsafe Shutdowns: 0 00:08:21.302 Unrecoverable Media Errors: 0 00:08:21.302 Lifetime Error Log Entries: 0 00:08:21.302 Warning Temperature Time: 0 minutes 00:08:21.302 Critical Temperature Time: 0 minutes 00:08:21.302 00:08:21.302 Number of Queues 00:08:21.302 ================ 00:08:21.302 Number of I/O Submission Queues: 64 00:08:21.302 Number of I/O Completion Queues: 64 00:08:21.302 00:08:21.302 ZNS Specific Controller Data 00:08:21.302 ============================ 00:08:21.302 Zone Append Size Limit: 0 00:08:21.302 00:08:21.302 00:08:21.302 Active Namespaces 00:08:21.302 ================= 00:08:21.302 Namespace ID:1 00:08:21.302 Error Recovery Timeout: Unlimited 00:08:21.302 Command Set Identifier: NVM (00h) 00:08:21.302 Deallocate: Supported 00:08:21.302 Deallocated/Unwritten Error: Supported 00:08:21.302 Deallocated Read Value: All 0x00 00:08:21.302 Deallocate in Write Zeroes: Not Supported 00:08:21.302 Deallocated Guard Field: 0xFFFF 00:08:21.302 Flush: Supported 00:08:21.302 Reservation: Not Supported 00:08:21.302 Namespace Sharing Capabilities: Private 00:08:21.302 Size (in LBAs): 1048576 (4GiB) 00:08:21.302 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.302 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.302 Thin Provisioning: Not Supported 00:08:21.302 Per-NS Atomic Units: No 00:08:21.302 Maximum Single Source Range Length: 128 00:08:21.302 Maximum Copy Length: 128 00:08:21.302 Maximum Source Range Count: 128 00:08:21.302 NGUID/EUI64 Never Reused: No 00:08:21.302 Namespace Write Protected: No 00:08:21.302 Number of LBA Formats: 8 00:08:21.302 Current LBA Format: LBA Format #04 00:08:21.302 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.302 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.302 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.302 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.302 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.302 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.302 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.302 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.302 00:08:21.302 NVM Specific Namespace Data 00:08:21.302 =========================== 00:08:21.302 Logical Block Storage Tag Mask: 0 00:08:21.302 Protection Information Capabilities: 00:08:21.302 16b Guard Protection Information Storage Tag Support: No 00:08:21.302 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.302 Storage Tag Check Read Support: No 00:08:21.302 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.302 Namespace ID:2 00:08:21.302 Error Recovery Timeout: Unlimited 00:08:21.302 Command Set Identifier: NVM (00h) 00:08:21.302 Deallocate: Supported 00:08:21.302 Deallocated/Unwritten Error: Supported 00:08:21.302 Deallocated Read Value: All 0x00 00:08:21.302 Deallocate in Write Zeroes: Not Supported 00:08:21.302 Deallocated Guard Field: 0xFFFF 00:08:21.302 Flush: Supported 00:08:21.302 Reservation: Not Supported 00:08:21.302 Namespace Sharing Capabilities: Private 00:08:21.302 Size (in LBAs): 1048576 (4GiB) 00:08:21.302 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.302 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.302 Thin Provisioning: Not Supported 00:08:21.303 Per-NS Atomic Units: No 00:08:21.303 Maximum Single Source Range Length: 128 00:08:21.303 Maximum Copy Length: 128 00:08:21.303 Maximum Source Range Count: 128 00:08:21.303 NGUID/EUI64 Never Reused: No 00:08:21.303 Namespace Write Protected: No 00:08:21.303 Number of LBA Formats: 8 00:08:21.303 Current LBA Format: LBA Format #04 00:08:21.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.303 00:08:21.303 NVM Specific Namespace Data 00:08:21.303 =========================== 00:08:21.303 Logical Block Storage Tag Mask: 0 00:08:21.303 Protection Information Capabilities: 00:08:21.303 16b Guard Protection Information Storage Tag Support: No 00:08:21.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.303 Storage Tag Check Read Support: No 00:08:21.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Namespace ID:3 00:08:21.303 Error Recovery Timeout: Unlimited 00:08:21.303 Command Set Identifier: NVM (00h) 00:08:21.303 Deallocate: Supported 00:08:21.303 Deallocated/Unwritten Error: Supported 00:08:21.303 Deallocated Read 
Value: All 0x00 00:08:21.303 Deallocate in Write Zeroes: Not Supported 00:08:21.303 Deallocated Guard Field: 0xFFFF 00:08:21.303 Flush: Supported 00:08:21.303 Reservation: Not Supported 00:08:21.303 Namespace Sharing Capabilities: Private 00:08:21.303 Size (in LBAs): 1048576 (4GiB) 00:08:21.303 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.303 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.303 Thin Provisioning: Not Supported 00:08:21.303 Per-NS Atomic Units: No 00:08:21.303 Maximum Single Source Range Length: 128 00:08:21.303 Maximum Copy Length: 128 00:08:21.303 Maximum Source Range Count: 128 00:08:21.303 NGUID/EUI64 Never Reused: No 00:08:21.303 Namespace Write Protected: No 00:08:21.303 Number of LBA Formats: 8 00:08:21.303 Current LBA Format: LBA Format #04 00:08:21.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.303 00:08:21.303 NVM Specific Namespace Data 00:08:21.303 =========================== 00:08:21.303 Logical Block Storage Tag Mask: 0 00:08:21.303 Protection Information Capabilities: 00:08:21.303 16b Guard Protection Information Storage Tag Support: No 00:08:21.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.303 Storage Tag Check Read Support: No 00:08:21.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.303 22:04:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.303 22:04:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:21.565 ===================================================== 00:08:21.565 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.565 ===================================================== 00:08:21.565 Controller Capabilities/Features 00:08:21.565 ================================ 00:08:21.565 Vendor ID: 1b36 00:08:21.565 Subsystem Vendor ID: 1af4 00:08:21.565 Serial Number: 12343 00:08:21.565 Model Number: QEMU NVMe Ctrl 00:08:21.565 Firmware Version: 8.0.0 00:08:21.565 Recommended Arb Burst: 6 00:08:21.565 IEEE OUI Identifier: 00 54 52 00:08:21.565 Multi-path I/O 00:08:21.565 May have multiple subsystem ports: No 00:08:21.565 May have multiple controllers: Yes 00:08:21.565 Associated with SR-IOV VF: No 00:08:21.565 Max Data Transfer Size: 524288 00:08:21.565 Max Number of Namespaces: 
256 00:08:21.565 Max Number of I/O Queues: 64 00:08:21.565 NVMe Specification Version (VS): 1.4 00:08:21.565 NVMe Specification Version (Identify): 1.4 00:08:21.565 Maximum Queue Entries: 2048 00:08:21.565 Contiguous Queues Required: Yes 00:08:21.565 Arbitration Mechanisms Supported 00:08:21.565 Weighted Round Robin: Not Supported 00:08:21.565 Vendor Specific: Not Supported 00:08:21.565 Reset Timeout: 7500 ms 00:08:21.565 Doorbell Stride: 4 bytes 00:08:21.565 NVM Subsystem Reset: Not Supported 00:08:21.565 Command Sets Supported 00:08:21.565 NVM Command Set: Supported 00:08:21.565 Boot Partition: Not Supported 00:08:21.565 Memory Page Size Minimum: 4096 bytes 00:08:21.565 Memory Page Size Maximum: 65536 bytes 00:08:21.565 Persistent Memory Region: Not Supported 00:08:21.565 Optional Asynchronous Events Supported 00:08:21.565 Namespace Attribute Notices: Supported 00:08:21.565 Firmware Activation Notices: Not Supported 00:08:21.565 ANA Change Notices: Not Supported 00:08:21.565 PLE Aggregate Log Change Notices: Not Supported 00:08:21.565 LBA Status Info Alert Notices: Not Supported 00:08:21.565 EGE Aggregate Log Change Notices: Not Supported 00:08:21.565 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.565 Zone Descriptor Change Notices: Not Supported 00:08:21.565 Discovery Log Change Notices: Not Supported 00:08:21.565 Controller Attributes 00:08:21.565 128-bit Host Identifier: Not Supported 00:08:21.565 Non-Operational Permissive Mode: Not Supported 00:08:21.565 NVM Sets: Not Supported 00:08:21.565 Read Recovery Levels: Not Supported 00:08:21.565 Endurance Groups: Supported 00:08:21.565 Predictable Latency Mode: Not Supported 00:08:21.565 Traffic Based Keep Alive: Not Supported 00:08:21.565 Namespace Granularity: Not Supported 00:08:21.565 SQ Associations: Not Supported 00:08:21.565 UUID List: Not Supported 00:08:21.565 Multi-Domain Subsystem: Not Supported 00:08:21.565 Fixed Capacity Management: Not Supported 00:08:21.565 Variable Capacity Management: Not Supported 00:08:21.565 Delete Endurance Group: Not Supported 00:08:21.565 Delete NVM Set: Not Supported 00:08:21.565 Extended LBA Formats Supported: Supported 00:08:21.565 Flexible Data Placement Supported: Supported 00:08:21.565 00:08:21.565 Controller Memory Buffer Support 00:08:21.565 ================================ 00:08:21.565 Supported: No 00:08:21.565 00:08:21.565 Persistent Memory Region Support 00:08:21.565 ================================ 00:08:21.565 Supported: No 00:08:21.565 00:08:21.565 Admin Command Set Attributes 00:08:21.565 ============================ 00:08:21.565 Security Send/Receive: Not Supported 00:08:21.565 Format NVM: Supported 00:08:21.565 Firmware Activate/Download: Not Supported 00:08:21.565 Namespace Management: Supported 00:08:21.565 Device Self-Test: Not Supported 00:08:21.565 Directives: Supported 00:08:21.565 NVMe-MI: Not Supported 00:08:21.565 Virtualization Management: Not Supported 00:08:21.565 Doorbell Buffer Config: Supported 00:08:21.565 Get LBA Status Capability: Not Supported 00:08:21.565 Command & Feature Lockdown Capability: Not Supported 00:08:21.565 Abort Command Limit: 4 00:08:21.565 Async Event Request Limit: 4 00:08:21.565 Number of Firmware Slots: N/A 00:08:21.565 Firmware Slot 1 Read-Only: N/A 00:08:21.565 Firmware Activation Without Reset: N/A 00:08:21.565 Multiple Update Detection Support: N/A 00:08:21.565 Firmware Update Granularity: No Information Provided 00:08:21.565 Per-Namespace SMART Log: Yes 00:08:21.565 Asymmetric Namespace Access Log Page: Not Supported
00:08:21.565 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:21.565 Command Effects Log Page: Supported 00:08:21.565 Get Log Page Extended Data: Supported 00:08:21.565 Telemetry Log Pages: Not Supported 00:08:21.565 Persistent Event Log Pages: Not Supported 00:08:21.565 Supported Log Pages Log Page: May Support 00:08:21.565 Commands Supported & Effects Log Page: Not Supported 00:08:21.565 Feature Identifiers & Effects Log Page: May Support 00:08:21.565 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.565 Data Area 4 for Telemetry Log: Not Supported 00:08:21.565 Error Log Page Entries Supported: 1 00:08:21.565 Keep Alive: Not Supported 00:08:21.565 00:08:21.565 NVM Command Set Attributes 00:08:21.565 ========================== 00:08:21.565 Submission Queue Entry Size 00:08:21.565 Max: 64 00:08:21.565 Min: 64 00:08:21.565 Completion Queue Entry Size 00:08:21.565 Max: 16 00:08:21.565 Min: 16 00:08:21.565 Number of Namespaces: 256 00:08:21.565 Compare Command: Supported 00:08:21.565 Write Uncorrectable Command: Not Supported 00:08:21.565 Dataset Management Command: Supported 00:08:21.565 Write Zeroes Command: Supported 00:08:21.565 Set Features Save Field: Supported 00:08:21.565 Reservations: Not Supported 00:08:21.565 Timestamp: Supported 00:08:21.565 Copy: Supported 00:08:21.565 Volatile Write Cache: Present 00:08:21.565 Atomic Write Unit (Normal): 1 00:08:21.565 Atomic Write Unit (PFail): 1 00:08:21.565 Atomic Compare & Write Unit: 1 00:08:21.565 Fused Compare & Write: Not Supported 00:08:21.565 Scatter-Gather List 00:08:21.565 SGL Command Set: Supported 00:08:21.565 SGL Keyed: Not Supported 00:08:21.565 SGL Bit Bucket Descriptor: Not Supported 00:08:21.565 SGL Metadata Pointer: Not Supported 00:08:21.565 Oversized SGL: Not Supported 00:08:21.565 SGL Metadata Address: Not Supported 00:08:21.565 SGL Offset: Not Supported 00:08:21.565 Transport SGL Data Block: Not Supported 00:08:21.565 Replay Protected Memory Block: Not Supported 00:08:21.565 00:08:21.565 Firmware Slot Information 00:08:21.565 ========================= 00:08:21.565 Active slot: 1 00:08:21.565 Slot 1 Firmware Revision: 1.0 00:08:21.565 00:08:21.565 00:08:21.565 Commands Supported and Effects 00:08:21.565 ============================== 00:08:21.565 Admin Commands 00:08:21.565 -------------- 00:08:21.565 Delete I/O Submission Queue (00h): Supported 00:08:21.565 Create I/O Submission Queue (01h): Supported 00:08:21.565 Get Log Page (02h): Supported 00:08:21.566 Delete I/O Completion Queue (04h): Supported 00:08:21.566 Create I/O Completion Queue (05h): Supported 00:08:21.566 Identify (06h): Supported 00:08:21.566 Abort (08h): Supported 00:08:21.566 Set Features (09h): Supported 00:08:21.566 Get Features (0Ah): Supported 00:08:21.566 Asynchronous Event Request (0Ch): Supported 00:08:21.566 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.566 Directive Send (19h): Supported 00:08:21.566 Directive Receive (1Ah): Supported 00:08:21.566 Virtualization Management (1Ch): Supported 00:08:21.566 Doorbell Buffer Config (7Ch): Supported 00:08:21.566 Format NVM (80h): Supported LBA-Change 00:08:21.566 I/O Commands 00:08:21.566 ------------ 00:08:21.566 Flush (00h): Supported LBA-Change 00:08:21.566 Write (01h): Supported LBA-Change 00:08:21.566 Read (02h): Supported 00:08:21.566 Compare (05h): Supported 00:08:21.566 Write Zeroes (08h): Supported LBA-Change 00:08:21.566 Dataset Management (09h): Supported LBA-Change 00:08:21.566 Unknown (0Ch): Supported 00:08:21.566 Unknown (12h): Supported 00:08:21.566 Copy
(19h): Supported LBA-Change 00:08:21.566 Unknown (1Dh): Supported LBA-Change 00:08:21.566 00:08:21.566 Error Log 00:08:21.566 ========= 00:08:21.566 00:08:21.566 Arbitration 00:08:21.566 =========== 00:08:21.566 Arbitration Burst: no limit 00:08:21.566 00:08:21.566 Power Management 00:08:21.566 ================ 00:08:21.566 Number of Power States: 1 00:08:21.566 Current Power State: Power State #0 00:08:21.566 Power State #0: 00:08:21.566 Max Power: 25.00 W 00:08:21.566 Non-Operational State: Operational 00:08:21.566 Entry Latency: 16 microseconds 00:08:21.566 Exit Latency: 4 microseconds 00:08:21.566 Relative Read Throughput: 0 00:08:21.566 Relative Read Latency: 0 00:08:21.566 Relative Write Throughput: 0 00:08:21.566 Relative Write Latency: 0 00:08:21.566 Idle Power: Not Reported 00:08:21.566 Active Power: Not Reported 00:08:21.566 Non-Operational Permissive Mode: Not Supported 00:08:21.566 00:08:21.566 Health Information 00:08:21.566 ================== 00:08:21.566 Critical Warnings: 00:08:21.566 Available Spare Space: OK 00:08:21.566 Temperature: OK 00:08:21.566 Device Reliability: OK 00:08:21.566 Read Only: No 00:08:21.566 Volatile Memory Backup: OK 00:08:21.566 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.566 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.566 Available Spare: 0% 00:08:21.566 Available Spare Threshold: 0% 00:08:21.566 Life Percentage Used: 0% 00:08:21.566 Data Units Read: 898 00:08:21.566 Data Units Written: 827 00:08:21.566 Host Read Commands: 40876 00:08:21.566 Host Write Commands: 40299 00:08:21.566 Controller Busy Time: 0 minutes 00:08:21.566 Power Cycles: 0 00:08:21.566 Power On Hours: 0 hours 00:08:21.566 Unsafe Shutdowns: 0 00:08:21.566 Unrecoverable Media Errors: 0 00:08:21.566 Lifetime Error Log Entries: 0 00:08:21.566 Warning Temperature Time: 0 minutes 00:08:21.566 Critical Temperature Time: 0 minutes 00:08:21.566 00:08:21.566 Number of Queues 00:08:21.566 ================ 00:08:21.566 Number of I/O Submission Queues: 64 00:08:21.566 Number of I/O Completion Queues: 64 00:08:21.566 00:08:21.566 ZNS Specific Controller Data 00:08:21.566 ============================ 00:08:21.566 Zone Append Size Limit: 0 00:08:21.566 00:08:21.566 00:08:21.566 Active Namespaces 00:08:21.566 ================= 00:08:21.566 Namespace ID:1 00:08:21.566 Error Recovery Timeout: Unlimited 00:08:21.566 Command Set Identifier: NVM (00h) 00:08:21.566 Deallocate: Supported 00:08:21.566 Deallocated/Unwritten Error: Supported 00:08:21.566 Deallocated Read Value: All 0x00 00:08:21.566 Deallocate in Write Zeroes: Not Supported 00:08:21.566 Deallocated Guard Field: 0xFFFF 00:08:21.566 Flush: Supported 00:08:21.566 Reservation: Not Supported 00:08:21.566 Namespace Sharing Capabilities: Multiple Controllers 00:08:21.566 Size (in LBAs): 262144 (1GiB) 00:08:21.566 Capacity (in LBAs): 262144 (1GiB) 00:08:21.566 Utilization (in LBAs): 262144 (1GiB) 00:08:21.566 Thin Provisioning: Not Supported 00:08:21.566 Per-NS Atomic Units: No 00:08:21.566 Maximum Single Source Range Length: 128 00:08:21.566 Maximum Copy Length: 128 00:08:21.566 Maximum Source Range Count: 128 00:08:21.566 NGUID/EUI64 Never Reused: No 00:08:21.566 Namespace Write Protected: No 00:08:21.566 Endurance group ID: 1 00:08:21.566 Number of LBA Formats: 8 00:08:21.566 Current LBA Format: LBA Format #04 00:08:21.566 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.566 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.566 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.566 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:21.566 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.566 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.566 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.566 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.566 00:08:21.566 Get Feature FDP: 00:08:21.566 ================ 00:08:21.566 Enabled: Yes 00:08:21.566 FDP configuration index: 0 00:08:21.566 00:08:21.566 FDP configurations log page 00:08:21.566 =========================== 00:08:21.566 Number of FDP configurations: 1 00:08:21.566 Version: 0 00:08:21.566 Size: 112 00:08:21.566 FDP Configuration Descriptor: 0 00:08:21.566 Descriptor Size: 96 00:08:21.566 Reclaim Group Identifier format: 2 00:08:21.566 FDP Volatile Write Cache: Not Present 00:08:21.566 FDP Configuration: Valid 00:08:21.566 Vendor Specific Size: 0 00:08:21.566 Number of Reclaim Groups: 2 00:08:21.566 Number of Reclaim Unit Handles: 8 00:08:21.566 Max Placement Identifiers: 128 00:08:21.566 Number of Namespaces Supported: 256 00:08:21.566 Reclaim unit Nominal Size: 6000000 bytes 00:08:21.566 Estimated Reclaim Unit Time Limit: Not Reported 00:08:21.566 RUH Desc #000: RUH Type: Initially Isolated 00:08:21.566 RUH Desc #001: RUH Type: Initially Isolated 00:08:21.566 RUH Desc #002: RUH Type: Initially Isolated 00:08:21.566 RUH Desc #003: RUH Type: Initially Isolated 00:08:21.566 RUH Desc #004: RUH Type: Initially Isolated 00:08:21.566 RUH Desc #005: RUH Type: Initially Isolated 00:08:21.566 RUH Desc #006: RUH Type: Initially Isolated 00:08:21.566 RUH Desc #007: RUH Type: Initially Isolated 00:08:21.566 00:08:21.566 FDP reclaim unit handle usage log page 00:08:21.566 ====================================== 00:08:21.566 Number of Reclaim Unit Handles: 8 00:08:21.566 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:21.566 RUH Usage Desc #001: RUH Attributes: Unused 00:08:21.566 RUH Usage Desc #002: RUH Attributes: Unused 00:08:21.566 RUH Usage Desc #003: RUH Attributes: Unused 00:08:21.566 RUH Usage Desc #004: RUH Attributes: Unused 00:08:21.566 RUH Usage Desc #005: RUH Attributes: Unused 00:08:21.566 RUH Usage Desc #006: RUH Attributes: Unused 00:08:21.566 RUH Usage Desc #007: RUH Attributes: Unused 00:08:21.566 00:08:21.566 FDP statistics log page 00:08:21.566 ======================= 00:08:21.566 Host bytes with metadata written: 531931136 00:08:21.566 Media bytes with metadata written: 532021248 00:08:21.566 Media bytes erased: 0 00:08:21.566 00:08:21.566 FDP events log page 00:08:21.566 =================== 00:08:21.566 Number of FDP events: 0 00:08:21.566 00:08:21.566 NVM Specific Namespace Data 00:08:21.566 =========================== 00:08:21.566 Logical Block Storage Tag Mask: 0 00:08:21.566 Protection Information Capabilities: 00:08:21.566 16b Guard Protection Information Storage Tag Support: No 00:08:21.566 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.566 Storage Tag Check Read Support: No 00:08:21.566 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.567 00:08:21.567 real 0m1.104s 00:08:21.567 user 0m0.398s 00:08:21.567 sys 0m0.492s 00:08:21.567 22:04:27 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.567 22:04:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:21.567 ************************************ 00:08:21.567 END TEST nvme_identify 00:08:21.567 ************************************ 00:08:21.567 22:04:27 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:21.567 22:04:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.567 22:04:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.567 22:04:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.567 ************************************ 00:08:21.567 START TEST nvme_perf 00:08:21.567 ************************************ 00:08:21.567 22:04:27 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:21.567 22:04:27 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:22.959 Initializing NVMe Controllers 00:08:22.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:22.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:22.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:22.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:22.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:22.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:22.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:22.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:22.959 Initialization complete. Launching workers. 
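The identify pass above comes from nvme.sh looping spdk_nvme_identify over each controller's PCIe address; note that the health block reports temperature in integer Kelvin, with Celsius derived as Kelvin minus 273 (323 K -> 50 C). Below is a minimal sketch, outside the test harness, of re-running one controller and pulling those fields back out. It assumes the build-tree binary path and QEMU controller address captured in this log, and the awk field positions are an assumption based on the output format shown above (the raw tool output carries no timestamp prefixes; those are added by the harness):

#!/usr/bin/env bash
# Sketch only: re-run identify against the 0000:00:13.0 controller from the
# log above and extract two fields. Adjust SPDK_BIN and the traddr for your
# own environment, and run with the same privileges the harness uses.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

"$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 |
awk '
    # "Current Temperature: 323 Kelvin (50 Celsius)" -> $3 is the Kelvin value
    /Current Temperature:/ { printf "temperature: %d C\n", $3 - 273 }
    # "Current LBA Format: LBA Format #04" -> last field is the format index
    /Current LBA Format:/  { print "active LBA format:", $NF }
'

The nvme_perf stage that follows drives all six attached namespaces with the command line shown above: queue depth 128 (-q), a read workload (-w read) of 12288-byte I/Os (-o) for one second (-t), with latency tracking enabled (-LL), which produces the per-device summary and histogram tables below. As a sanity check on those tables, Little's law holds: for the first device, 16439.55 IOPS x 7788.54 us mean latency is approximately 128, the configured queue depth.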
00:08:22.959 ======================================================== 00:08:22.959 Latency(us) 00:08:22.959 Device Information : IOPS MiB/s Average min max 00:08:22.959 PCIE (0000:00:10.0) NSID 1 from core 0: 16439.55 192.65 7788.54 4914.86 26651.18 00:08:22.959 PCIE (0000:00:11.0) NSID 1 from core 0: 16439.55 192.65 7782.58 4775.63 26593.42 00:08:22.959 PCIE (0000:00:13.0) NSID 1 from core 0: 16439.55 192.65 7776.31 4010.36 26772.64 00:08:22.959 PCIE (0000:00:12.0) NSID 1 from core 0: 16439.55 192.65 7769.77 3794.56 26899.55 00:08:22.959 PCIE (0000:00:12.0) NSID 2 from core 0: 16439.55 192.65 7763.28 3562.03 27017.03 00:08:22.959 PCIE (0000:00:12.0) NSID 3 from core 0: 16439.55 192.65 7756.90 3321.87 26830.26 00:08:22.959 ======================================================== 00:08:22.959 Total : 98637.30 1155.91 7772.90 3321.87 27017.03 00:08:22.959 00:08:22.959 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:22.959 ================================================================================= 00:08:22.959 1.00000% : 6553.600us 00:08:22.959 10.00000% : 7007.311us 00:08:22.959 25.00000% : 7259.372us 00:08:22.959 50.00000% : 7561.846us 00:08:22.959 75.00000% : 7864.320us 00:08:22.959 90.00000% : 8166.794us 00:08:22.959 95.00000% : 9880.812us 00:08:22.959 98.00000% : 12603.077us 00:08:22.959 99.00000% : 14619.569us 00:08:22.959 99.50000% : 17644.308us 00:08:22.959 99.90000% : 26214.400us 00:08:22.959 99.99000% : 26617.698us 00:08:22.959 99.99900% : 26819.348us 00:08:22.959 99.99990% : 26819.348us 00:08:22.959 99.99999% : 26819.348us 00:08:22.959 00:08:22.959 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:22.959 ================================================================================= 00:08:22.959 1.00000% : 6604.012us 00:08:22.959 10.00000% : 7057.723us 00:08:22.959 25.00000% : 7309.785us 00:08:22.959 50.00000% : 7561.846us 00:08:22.959 75.00000% : 7813.908us 00:08:22.959 90.00000% : 8116.382us 00:08:22.959 95.00000% : 9981.637us 00:08:22.959 98.00000% : 12855.138us 00:08:22.959 99.00000% : 13812.972us 00:08:22.959 99.50000% : 17946.782us 00:08:22.959 99.90000% : 26214.400us 00:08:22.959 99.99000% : 26617.698us 00:08:22.959 99.99900% : 26617.698us 00:08:22.959 99.99990% : 26617.698us 00:08:22.959 99.99999% : 26617.698us 00:08:22.959 00:08:22.959 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:22.959 ================================================================================= 00:08:22.959 1.00000% : 6553.600us 00:08:22.959 10.00000% : 7057.723us 00:08:22.959 25.00000% : 7309.785us 00:08:22.959 50.00000% : 7561.846us 00:08:22.959 75.00000% : 7813.908us 00:08:22.959 90.00000% : 8116.382us 00:08:22.959 95.00000% : 9628.751us 00:08:22.959 98.00000% : 12351.015us 00:08:22.959 99.00000% : 13510.498us 00:08:22.959 99.50000% : 18148.431us 00:08:22.959 99.90000% : 26416.049us 00:08:22.959 99.99000% : 26819.348us 00:08:22.959 99.99900% : 26819.348us 00:08:22.959 99.99990% : 26819.348us 00:08:22.959 99.99999% : 26819.348us 00:08:22.959 00:08:22.959 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:22.959 ================================================================================= 00:08:22.959 1.00000% : 6553.600us 00:08:22.959 10.00000% : 7057.723us 00:08:22.959 25.00000% : 7259.372us 00:08:22.959 50.00000% : 7561.846us 00:08:22.959 75.00000% : 7813.908us 00:08:22.959 90.00000% : 8116.382us 00:08:22.959 95.00000% : 9124.628us 00:08:22.959 98.00000% : 12250.191us 00:08:22.959 99.00000% : 
13107.200us 00:08:22.959 99.50000% : 18249.255us 00:08:22.959 99.90000% : 26617.698us 00:08:22.959 99.99000% : 27020.997us 00:08:22.959 99.99900% : 27020.997us 00:08:22.959 99.99990% : 27020.997us 00:08:22.959 99.99999% : 27020.997us 00:08:22.959 00:08:22.959 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:22.959 ================================================================================= 00:08:22.959 1.00000% : 6553.600us 00:08:22.959 10.00000% : 7057.723us 00:08:22.959 25.00000% : 7309.785us 00:08:22.959 50.00000% : 7561.846us 00:08:22.959 75.00000% : 7813.908us 00:08:22.959 90.00000% : 8116.382us 00:08:22.959 95.00000% : 9074.215us 00:08:22.959 98.00000% : 12149.366us 00:08:22.959 99.00000% : 12754.314us 00:08:22.959 99.50000% : 18350.080us 00:08:22.959 99.90000% : 26617.698us 00:08:22.959 99.99000% : 27020.997us 00:08:22.959 99.99900% : 27020.997us 00:08:22.959 99.99990% : 27020.997us 00:08:22.959 99.99999% : 27020.997us 00:08:22.959 00:08:22.959 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:22.959 ================================================================================= 00:08:22.959 1.00000% : 6553.600us 00:08:22.959 10.00000% : 7057.723us 00:08:22.959 25.00000% : 7309.785us 00:08:22.959 50.00000% : 7561.846us 00:08:22.959 75.00000% : 7813.908us 00:08:22.959 90.00000% : 8065.969us 00:08:22.959 95.00000% : 9326.277us 00:08:22.959 98.00000% : 12098.954us 00:08:22.959 99.00000% : 13107.200us 00:08:22.959 99.50000% : 18551.729us 00:08:22.959 99.90000% : 26617.698us 00:08:22.959 99.99000% : 26819.348us 00:08:22.959 99.99900% : 27020.997us 00:08:22.959 99.99990% : 27020.997us 00:08:22.959 99.99999% : 27020.997us 00:08:22.959 00:08:22.959 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:22.959 ============================================================================== 00:08:22.959 Range in us Cumulative IO count 00:08:22.959 4889.994 - 4915.200: 0.0061% ( 1) 00:08:22.959 4915.200 - 4940.406: 0.0243% ( 3) 00:08:22.959 4940.406 - 4965.612: 0.0304% ( 1) 00:08:22.959 4965.612 - 4990.818: 0.0426% ( 2) 00:08:22.959 4990.818 - 5016.025: 0.0547% ( 2) 00:08:22.959 5016.025 - 5041.231: 0.0669% ( 2) 00:08:22.959 5041.231 - 5066.437: 0.0851% ( 3) 00:08:22.959 5066.437 - 5091.643: 0.0912% ( 1) 00:08:22.959 5091.643 - 5116.849: 0.1034% ( 2) 00:08:22.959 5116.849 - 5142.055: 0.1094% ( 1) 00:08:22.959 5142.055 - 5167.262: 0.1277% ( 3) 00:08:22.959 5167.262 - 5192.468: 0.1398% ( 2) 00:08:22.959 5192.468 - 5217.674: 0.1520% ( 2) 00:08:22.959 5217.674 - 5242.880: 0.1642% ( 2) 00:08:22.959 5242.880 - 5268.086: 0.1763% ( 2) 00:08:22.959 5268.086 - 5293.292: 0.1885% ( 2) 00:08:22.959 5293.292 - 5318.498: 0.2067% ( 3) 00:08:22.959 5318.498 - 5343.705: 0.2128% ( 1) 00:08:22.959 5343.705 - 5368.911: 0.2250% ( 2) 00:08:22.959 5368.911 - 5394.117: 0.2432% ( 3) 00:08:22.959 5394.117 - 5419.323: 0.2554% ( 2) 00:08:22.960 5419.323 - 5444.529: 0.2614% ( 1) 00:08:22.960 5444.529 - 5469.735: 0.2736% ( 2) 00:08:22.960 5469.735 - 5494.942: 0.2918% ( 3) 00:08:22.960 5494.942 - 5520.148: 0.3040% ( 2) 00:08:22.960 5520.148 - 5545.354: 0.3101% ( 1) 00:08:22.960 5545.354 - 5570.560: 0.3283% ( 3) 00:08:22.960 5570.560 - 5595.766: 0.3344% ( 1) 00:08:22.960 5595.766 - 5620.972: 0.3526% ( 3) 00:08:22.960 5620.972 - 5646.178: 0.3648% ( 2) 00:08:22.960 5646.178 - 5671.385: 0.3709% ( 1) 00:08:22.960 5671.385 - 5696.591: 0.3830% ( 2) 00:08:22.960 5696.591 - 5721.797: 0.3891% ( 1) 00:08:22.960 6125.095 - 6150.302: 0.4013% ( 2) 00:08:22.960 6150.302 - 
6175.508: 0.4195% ( 3) 00:08:22.960 6175.508 - 6200.714: 0.4377% ( 3) 00:08:22.960 6225.920 - 6251.126: 0.4499% ( 2) 00:08:22.960 6251.126 - 6276.332: 0.4560% ( 1) 00:08:22.960 6276.332 - 6301.538: 0.4621% ( 1) 00:08:22.960 6301.538 - 6326.745: 0.4742% ( 2) 00:08:22.960 6326.745 - 6351.951: 0.4864% ( 2) 00:08:22.960 6351.951 - 6377.157: 0.4985% ( 2) 00:08:22.960 6377.157 - 6402.363: 0.5472% ( 8) 00:08:22.960 6402.363 - 6427.569: 0.6323% ( 14) 00:08:22.960 6427.569 - 6452.775: 0.7113% ( 13) 00:08:22.960 6452.775 - 6503.188: 0.9424% ( 38) 00:08:22.960 6503.188 - 6553.600: 1.1552% ( 35) 00:08:22.960 6553.600 - 6604.012: 1.5503% ( 65) 00:08:22.960 6604.012 - 6654.425: 2.0489% ( 82) 00:08:22.960 6654.425 - 6704.837: 2.6751% ( 103) 00:08:22.960 6704.837 - 6755.249: 3.3621% ( 113) 00:08:22.960 6755.249 - 6805.662: 4.1829% ( 135) 00:08:22.960 6805.662 - 6856.074: 5.2408% ( 174) 00:08:22.960 6856.074 - 6906.486: 6.5661% ( 218) 00:08:22.960 6906.486 - 6956.898: 8.2077% ( 270) 00:08:22.960 6956.898 - 7007.311: 10.3903% ( 359) 00:08:22.960 7007.311 - 7057.723: 13.0593% ( 439) 00:08:22.960 7057.723 - 7108.135: 16.0931% ( 499) 00:08:22.960 7108.135 - 7158.548: 19.5951% ( 576) 00:08:22.960 7158.548 - 7208.960: 23.3524% ( 618) 00:08:22.960 7208.960 - 7259.372: 27.6021% ( 699) 00:08:22.960 7259.372 - 7309.785: 31.8945% ( 706) 00:08:22.960 7309.785 - 7360.197: 36.1321% ( 697) 00:08:22.960 7360.197 - 7410.609: 40.5764% ( 731) 00:08:22.960 7410.609 - 7461.022: 44.8018% ( 695) 00:08:22.960 7461.022 - 7511.434: 49.1124% ( 709) 00:08:22.960 7511.434 - 7561.846: 53.4047% ( 706) 00:08:22.960 7561.846 - 7612.258: 57.5815% ( 687) 00:08:22.960 7612.258 - 7662.671: 61.6306% ( 666) 00:08:22.960 7662.671 - 7713.083: 65.7344% ( 675) 00:08:22.960 7713.083 - 7763.495: 69.6072% ( 637) 00:08:22.960 7763.495 - 7813.908: 73.3037% ( 608) 00:08:22.960 7813.908 - 7864.320: 77.0306% ( 613) 00:08:22.960 7864.320 - 7914.732: 80.3928% ( 553) 00:08:22.960 7914.732 - 7965.145: 83.3171% ( 481) 00:08:22.960 7965.145 - 8015.557: 85.8706% ( 420) 00:08:22.960 8015.557 - 8065.969: 88.0958% ( 366) 00:08:22.960 8065.969 - 8116.382: 89.6279% ( 252) 00:08:22.960 8116.382 - 8166.794: 90.7892% ( 191) 00:08:22.960 8166.794 - 8217.206: 91.5978% ( 133) 00:08:22.960 8217.206 - 8267.618: 92.1571% ( 92) 00:08:22.960 8267.618 - 8318.031: 92.6192% ( 76) 00:08:22.960 8318.031 - 8368.443: 92.9292% ( 51) 00:08:22.960 8368.443 - 8418.855: 93.1907% ( 43) 00:08:22.960 8418.855 - 8469.268: 93.3062% ( 19) 00:08:22.960 8469.268 - 8519.680: 93.4278% ( 20) 00:08:22.960 8519.680 - 8570.092: 93.5129% ( 14) 00:08:22.960 8570.092 - 8620.505: 93.5980% ( 14) 00:08:22.960 8620.505 - 8670.917: 93.6831% ( 14) 00:08:22.960 8670.917 - 8721.329: 93.7439% ( 10) 00:08:22.960 8721.329 - 8771.742: 93.7926% ( 8) 00:08:22.960 8771.742 - 8822.154: 93.8230% ( 5) 00:08:22.960 8822.154 - 8872.566: 93.8534% ( 5) 00:08:22.960 8872.566 - 8922.978: 93.8777% ( 4) 00:08:22.960 8922.978 - 8973.391: 93.8898% ( 2) 00:08:22.960 8973.391 - 9023.803: 93.9081% ( 3) 00:08:22.960 9023.803 - 9074.215: 93.9446% ( 6) 00:08:22.960 9074.215 - 9124.628: 93.9750% ( 5) 00:08:22.960 9124.628 - 9175.040: 94.0236% ( 8) 00:08:22.960 9175.040 - 9225.452: 94.0844% ( 10) 00:08:22.960 9225.452 - 9275.865: 94.1330% ( 8) 00:08:22.960 9275.865 - 9326.277: 94.1999% ( 11) 00:08:22.960 9326.277 - 9376.689: 94.2485% ( 8) 00:08:22.960 9376.689 - 9427.102: 94.3154% ( 11) 00:08:22.960 9427.102 - 9477.514: 94.3823% ( 11) 00:08:22.960 9477.514 - 9527.926: 94.4431% ( 10) 00:08:22.960 9527.926 - 9578.338: 94.5221% ( 13) 
00:08:22.960 9578.338 - 9628.751: 94.6012% ( 13) 00:08:22.960 9628.751 - 9679.163: 94.6802% ( 13) 00:08:22.960 9679.163 - 9729.575: 94.7653% ( 14) 00:08:22.960 9729.575 - 9779.988: 94.8444% ( 13) 00:08:22.960 9779.988 - 9830.400: 94.9599% ( 19) 00:08:22.960 9830.400 - 9880.812: 95.0571% ( 16) 00:08:22.960 9880.812 - 9931.225: 95.1423% ( 14) 00:08:22.960 9931.225 - 9981.637: 95.2274% ( 14) 00:08:22.960 9981.637 - 10032.049: 95.3307% ( 17) 00:08:22.960 10032.049 - 10082.462: 95.4523% ( 20) 00:08:22.960 10082.462 - 10132.874: 95.5679% ( 19) 00:08:22.960 10132.874 - 10183.286: 95.6530% ( 14) 00:08:22.960 10183.286 - 10233.698: 95.7320% ( 13) 00:08:22.960 10233.698 - 10284.111: 95.8232% ( 15) 00:08:22.960 10284.111 - 10334.523: 95.9022% ( 13) 00:08:22.960 10334.523 - 10384.935: 95.9874% ( 14) 00:08:22.960 10384.935 - 10435.348: 96.0725% ( 14) 00:08:22.960 10435.348 - 10485.760: 96.1454% ( 12) 00:08:22.960 10485.760 - 10536.172: 96.2366% ( 15) 00:08:22.960 10536.172 - 10586.585: 96.2974% ( 10) 00:08:22.960 10586.585 - 10636.997: 96.3582% ( 10) 00:08:22.960 10636.997 - 10687.409: 96.4190% ( 10) 00:08:22.960 10687.409 - 10737.822: 96.4737% ( 9) 00:08:22.960 10737.822 - 10788.234: 96.5224% ( 8) 00:08:22.960 10788.234 - 10838.646: 96.5893% ( 11) 00:08:22.960 10838.646 - 10889.058: 96.6318% ( 7) 00:08:22.960 10889.058 - 10939.471: 96.7108% ( 13) 00:08:22.960 10939.471 - 10989.883: 96.7595% ( 8) 00:08:22.960 10989.883 - 11040.295: 96.8203% ( 10) 00:08:22.960 11040.295 - 11090.708: 96.8872% ( 11) 00:08:22.960 11090.708 - 11141.120: 96.9480% ( 10) 00:08:22.960 11141.120 - 11191.532: 96.9966% ( 8) 00:08:22.960 11191.532 - 11241.945: 97.0452% ( 8) 00:08:22.960 11241.945 - 11292.357: 97.0939% ( 8) 00:08:22.960 11292.357 - 11342.769: 97.1425% ( 8) 00:08:22.960 11342.769 - 11393.182: 97.1972% ( 9) 00:08:22.960 11393.182 - 11443.594: 97.2398% ( 7) 00:08:22.960 11443.594 - 11494.006: 97.2823% ( 7) 00:08:22.960 11494.006 - 11544.418: 97.3310% ( 8) 00:08:22.960 11544.418 - 11594.831: 97.3735% ( 7) 00:08:22.960 11594.831 - 11645.243: 97.4283% ( 9) 00:08:22.960 11645.243 - 11695.655: 97.4830% ( 9) 00:08:22.960 11695.655 - 11746.068: 97.5195% ( 6) 00:08:22.960 11746.068 - 11796.480: 97.5681% ( 8) 00:08:22.960 11796.480 - 11846.892: 97.6107% ( 7) 00:08:22.960 11846.892 - 11897.305: 97.6471% ( 6) 00:08:22.960 11897.305 - 11947.717: 97.6654% ( 3) 00:08:22.960 12048.542 - 12098.954: 97.6958% ( 5) 00:08:22.960 12098.954 - 12149.366: 97.7140% ( 3) 00:08:22.960 12149.366 - 12199.778: 97.7322% ( 3) 00:08:22.960 12199.778 - 12250.191: 97.7566% ( 4) 00:08:22.960 12250.191 - 12300.603: 97.7930% ( 6) 00:08:22.960 12300.603 - 12351.015: 97.8295% ( 6) 00:08:22.960 12351.015 - 12401.428: 97.8599% ( 5) 00:08:22.960 12401.428 - 12451.840: 97.9025% ( 7) 00:08:22.960 12451.840 - 12502.252: 97.9329% ( 5) 00:08:22.960 12502.252 - 12552.665: 97.9694% ( 6) 00:08:22.960 12552.665 - 12603.077: 98.0119% ( 7) 00:08:22.960 12603.077 - 12653.489: 98.0423% ( 5) 00:08:22.960 12653.489 - 12703.902: 98.0788% ( 6) 00:08:22.960 12703.902 - 12754.314: 98.1214% ( 7) 00:08:22.960 12754.314 - 12804.726: 98.1639% ( 7) 00:08:22.960 12804.726 - 12855.138: 98.2308% ( 11) 00:08:22.960 12855.138 - 12905.551: 98.2855% ( 9) 00:08:22.960 12905.551 - 13006.375: 98.3828% ( 16) 00:08:22.960 13006.375 - 13107.200: 98.4861% ( 17) 00:08:22.960 13107.200 - 13208.025: 98.5773% ( 15) 00:08:22.960 13208.025 - 13308.849: 98.6381% ( 10) 00:08:22.960 13308.849 - 13409.674: 98.6746% ( 6) 00:08:22.960 13409.674 - 13510.498: 98.7111% ( 6) 00:08:22.960 13510.498 - 13611.323: 
98.7415% ( 5) 00:08:22.960 13611.323 - 13712.148: 98.7840% ( 7) 00:08:22.960 13712.148 - 13812.972: 98.8205% ( 6) 00:08:22.960 13812.972 - 13913.797: 98.8327% ( 2) 00:08:22.960 14014.622 - 14115.446: 98.8448% ( 2) 00:08:22.960 14115.446 - 14216.271: 98.8813% ( 6) 00:08:22.960 14216.271 - 14317.095: 98.9178% ( 6) 00:08:22.960 14317.095 - 14417.920: 98.9604% ( 7) 00:08:22.960 14417.920 - 14518.745: 98.9908% ( 5) 00:08:22.960 14518.745 - 14619.569: 99.0333% ( 7) 00:08:22.960 14619.569 - 14720.394: 99.0698% ( 6) 00:08:22.960 14720.394 - 14821.218: 99.1063% ( 6) 00:08:22.960 14821.218 - 14922.043: 99.1428% ( 6) 00:08:22.960 14922.043 - 15022.868: 99.1792% ( 6) 00:08:22.960 15022.868 - 15123.692: 99.2157% ( 6) 00:08:22.960 15123.692 - 15224.517: 99.2218% ( 1) 00:08:22.961 16535.237 - 16636.062: 99.2400% ( 3) 00:08:22.961 16636.062 - 16736.886: 99.2765% ( 6) 00:08:22.961 16736.886 - 16837.711: 99.3191% ( 7) 00:08:22.961 16837.711 - 16938.535: 99.3434% ( 4) 00:08:22.961 17039.360 - 17140.185: 99.3495% ( 1) 00:08:22.961 17140.185 - 17241.009: 99.3859% ( 6) 00:08:22.961 17241.009 - 17341.834: 99.4224% ( 6) 00:08:22.961 17341.834 - 17442.658: 99.4650% ( 7) 00:08:22.961 17442.658 - 17543.483: 99.4954% ( 5) 00:08:22.961 17543.483 - 17644.308: 99.5319% ( 6) 00:08:22.961 17644.308 - 17745.132: 99.5744% ( 7) 00:08:22.961 17745.132 - 17845.957: 99.6109% ( 6) 00:08:22.961 24601.206 - 24702.031: 99.6170% ( 1) 00:08:22.961 24702.031 - 24802.855: 99.6352% ( 3) 00:08:22.961 24802.855 - 24903.680: 99.6595% ( 4) 00:08:22.961 24903.680 - 25004.505: 99.6778% ( 3) 00:08:22.961 25004.505 - 25105.329: 99.6960% ( 3) 00:08:22.961 25105.329 - 25206.154: 99.7203% ( 4) 00:08:22.961 25206.154 - 25306.978: 99.7386% ( 3) 00:08:22.961 25306.978 - 25407.803: 99.7568% ( 3) 00:08:22.961 25407.803 - 25508.628: 99.7811% ( 4) 00:08:22.961 25508.628 - 25609.452: 99.7933% ( 2) 00:08:22.961 25609.452 - 25710.277: 99.8115% ( 3) 00:08:22.961 25710.277 - 25811.102: 99.8298% ( 3) 00:08:22.961 25811.102 - 26012.751: 99.8723% ( 7) 00:08:22.961 26012.751 - 26214.400: 99.9149% ( 7) 00:08:22.961 26214.400 - 26416.049: 99.9514% ( 6) 00:08:22.961 26416.049 - 26617.698: 99.9939% ( 7) 00:08:22.961 26617.698 - 26819.348: 100.0000% ( 1) 00:08:22.961 00:08:22.961 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:22.961 ============================================================================== 00:08:22.961 Range in us Cumulative IO count 00:08:22.961 4763.963 - 4789.169: 0.0365% ( 6) 00:08:22.961 4789.169 - 4814.375: 0.0486% ( 2) 00:08:22.961 4814.375 - 4839.582: 0.0608% ( 2) 00:08:22.961 4839.582 - 4864.788: 0.0669% ( 1) 00:08:22.961 4864.788 - 4889.994: 0.0851% ( 3) 00:08:22.961 4889.994 - 4915.200: 0.0973% ( 2) 00:08:22.961 4915.200 - 4940.406: 0.1094% ( 2) 00:08:22.961 4940.406 - 4965.612: 0.1277% ( 3) 00:08:22.961 4965.612 - 4990.818: 0.1398% ( 2) 00:08:22.961 4990.818 - 5016.025: 0.1581% ( 3) 00:08:22.961 5016.025 - 5041.231: 0.1702% ( 2) 00:08:22.961 5041.231 - 5066.437: 0.1824% ( 2) 00:08:22.961 5066.437 - 5091.643: 0.2006% ( 3) 00:08:22.961 5091.643 - 5116.849: 0.2128% ( 2) 00:08:22.961 5116.849 - 5142.055: 0.2250% ( 2) 00:08:22.961 5142.055 - 5167.262: 0.2432% ( 3) 00:08:22.961 5167.262 - 5192.468: 0.2554% ( 2) 00:08:22.961 5192.468 - 5217.674: 0.2736% ( 3) 00:08:22.961 5217.674 - 5242.880: 0.2857% ( 2) 00:08:22.961 5242.880 - 5268.086: 0.2979% ( 2) 00:08:22.961 5268.086 - 5293.292: 0.3161% ( 3) 00:08:22.961 5293.292 - 5318.498: 0.3283% ( 2) 00:08:22.961 5318.498 - 5343.705: 0.3465% ( 3) 00:08:22.961 5343.705 - 5368.911: 
0.3587% ( 2) 00:08:22.961 5368.911 - 5394.117: 0.3769% ( 3) 00:08:22.961 5394.117 - 5419.323: 0.3891% ( 2) 00:08:22.961 6175.508 - 6200.714: 0.4073% ( 3) 00:08:22.961 6200.714 - 6225.920: 0.4256% ( 3) 00:08:22.961 6225.920 - 6251.126: 0.4377% ( 2) 00:08:22.961 6251.126 - 6276.332: 0.4438% ( 1) 00:08:22.961 6276.332 - 6301.538: 0.4499% ( 1) 00:08:22.961 6301.538 - 6326.745: 0.4681% ( 3) 00:08:22.961 6326.745 - 6351.951: 0.4803% ( 2) 00:08:22.961 6351.951 - 6377.157: 0.4925% ( 2) 00:08:22.961 6377.157 - 6402.363: 0.5046% ( 2) 00:08:22.961 6402.363 - 6427.569: 0.5168% ( 2) 00:08:22.961 6427.569 - 6452.775: 0.5411% ( 4) 00:08:22.961 6452.775 - 6503.188: 0.6688% ( 21) 00:08:22.961 6503.188 - 6553.600: 0.8816% ( 35) 00:08:22.961 6553.600 - 6604.012: 1.0700% ( 31) 00:08:22.961 6604.012 - 6654.425: 1.4956% ( 70) 00:08:22.961 6654.425 - 6704.837: 2.0246% ( 87) 00:08:22.961 6704.837 - 6755.249: 2.7663% ( 122) 00:08:22.961 6755.249 - 6805.662: 3.4715% ( 116) 00:08:22.961 6805.662 - 6856.074: 4.4200% ( 156) 00:08:22.961 6856.074 - 6906.486: 5.4718% ( 173) 00:08:22.961 6906.486 - 6956.898: 6.6877% ( 200) 00:08:22.961 6956.898 - 7007.311: 8.4387% ( 288) 00:08:22.961 7007.311 - 7057.723: 10.4572% ( 332) 00:08:22.961 7057.723 - 7108.135: 13.2721% ( 463) 00:08:22.961 7108.135 - 7158.548: 16.4883% ( 529) 00:08:22.961 7158.548 - 7208.960: 20.3490% ( 635) 00:08:22.961 7208.960 - 7259.372: 24.8237% ( 736) 00:08:22.961 7259.372 - 7309.785: 29.5477% ( 777) 00:08:22.961 7309.785 - 7360.197: 34.5027% ( 815) 00:08:22.961 7360.197 - 7410.609: 39.5246% ( 826) 00:08:22.961 7410.609 - 7461.022: 44.5647% ( 829) 00:08:22.961 7461.022 - 7511.434: 49.5319% ( 817) 00:08:22.961 7511.434 - 7561.846: 54.3166% ( 787) 00:08:22.961 7561.846 - 7612.258: 59.1683% ( 798) 00:08:22.961 7612.258 - 7662.671: 63.8619% ( 772) 00:08:22.961 7662.671 - 7713.083: 68.3852% ( 744) 00:08:22.961 7713.083 - 7763.495: 72.7748% ( 722) 00:08:22.961 7763.495 - 7813.908: 76.8300% ( 667) 00:08:22.961 7813.908 - 7864.320: 80.5265% ( 608) 00:08:22.961 7864.320 - 7914.732: 83.6758% ( 518) 00:08:22.961 7914.732 - 7965.145: 86.2536% ( 424) 00:08:22.961 7965.145 - 8015.557: 88.4484% ( 361) 00:08:22.961 8015.557 - 8065.969: 89.9805% ( 252) 00:08:22.961 8065.969 - 8116.382: 91.0384% ( 174) 00:08:22.961 8116.382 - 8166.794: 91.8106% ( 127) 00:08:22.961 8166.794 - 8217.206: 92.4064% ( 98) 00:08:22.961 8217.206 - 8267.618: 92.7955% ( 64) 00:08:22.961 8267.618 - 8318.031: 93.1177% ( 53) 00:08:22.961 8318.031 - 8368.443: 93.2940% ( 29) 00:08:22.961 8368.443 - 8418.855: 93.4460% ( 25) 00:08:22.961 8418.855 - 8469.268: 93.5676% ( 20) 00:08:22.961 8469.268 - 8519.680: 93.6649% ( 16) 00:08:22.961 8519.680 - 8570.092: 93.7378% ( 12) 00:08:22.961 8570.092 - 8620.505: 93.8047% ( 11) 00:08:22.961 8620.505 - 8670.917: 93.8655% ( 10) 00:08:22.961 8670.917 - 8721.329: 93.9263% ( 10) 00:08:22.961 8721.329 - 8771.742: 93.9628% ( 6) 00:08:22.961 8771.742 - 8822.154: 93.9871% ( 4) 00:08:22.961 8822.154 - 8872.566: 94.0114% ( 4) 00:08:22.961 8872.566 - 8922.978: 94.0297% ( 3) 00:08:22.961 8922.978 - 8973.391: 94.0540% ( 4) 00:08:22.961 8973.391 - 9023.803: 94.0722% ( 3) 00:08:22.961 9023.803 - 9074.215: 94.0965% ( 4) 00:08:22.961 9074.215 - 9124.628: 94.1209% ( 4) 00:08:22.961 9124.628 - 9175.040: 94.1391% ( 3) 00:08:22.961 9175.040 - 9225.452: 94.1634% ( 4) 00:08:22.961 9376.689 - 9427.102: 94.2121% ( 8) 00:08:22.961 9427.102 - 9477.514: 94.2364% ( 4) 00:08:22.961 9477.514 - 9527.926: 94.2729% ( 6) 00:08:22.961 9527.926 - 9578.338: 94.3093% ( 6) 00:08:22.961 9578.338 - 9628.751: 
94.3458% ( 6) 00:08:22.961 9628.751 - 9679.163: 94.4127% ( 11) 00:08:22.961 9679.163 - 9729.575: 94.4857% ( 12) 00:08:22.961 9729.575 - 9779.988: 94.5708% ( 14) 00:08:22.961 9779.988 - 9830.400: 94.6863% ( 19) 00:08:22.961 9830.400 - 9880.812: 94.7896% ( 17) 00:08:22.961 9880.812 - 9931.225: 94.8991% ( 18) 00:08:22.961 9931.225 - 9981.637: 95.0024% ( 17) 00:08:22.961 9981.637 - 10032.049: 95.0875% ( 14) 00:08:22.961 10032.049 - 10082.462: 95.2152% ( 21) 00:08:22.961 10082.462 - 10132.874: 95.3429% ( 21) 00:08:22.961 10132.874 - 10183.286: 95.4584% ( 19) 00:08:22.961 10183.286 - 10233.698: 95.5800% ( 20) 00:08:22.961 10233.698 - 10284.111: 95.7077% ( 21) 00:08:22.961 10284.111 - 10334.523: 95.8232% ( 19) 00:08:22.961 10334.523 - 10384.935: 95.9509% ( 21) 00:08:22.961 10384.935 - 10435.348: 96.0786% ( 21) 00:08:22.961 10435.348 - 10485.760: 96.2123% ( 22) 00:08:22.961 10485.760 - 10536.172: 96.3521% ( 23) 00:08:22.962 10536.172 - 10586.585: 96.4494% ( 16) 00:08:22.962 10586.585 - 10636.997: 96.5528% ( 17) 00:08:22.962 10636.997 - 10687.409: 96.6257% ( 12) 00:08:22.962 10687.409 - 10737.822: 96.6987% ( 12) 00:08:22.962 10737.822 - 10788.234: 96.7838% ( 14) 00:08:22.962 10788.234 - 10838.646: 96.8689% ( 14) 00:08:22.962 10838.646 - 10889.058: 96.9480% ( 13) 00:08:22.962 10889.058 - 10939.471: 97.0027% ( 9) 00:08:22.962 10939.471 - 10989.883: 97.0513% ( 8) 00:08:22.962 10989.883 - 11040.295: 97.1182% ( 11) 00:08:22.962 11040.295 - 11090.708: 97.1729% ( 9) 00:08:22.962 11090.708 - 11141.120: 97.2459% ( 12) 00:08:22.962 11141.120 - 11191.532: 97.3006% ( 9) 00:08:22.962 11191.532 - 11241.945: 97.3431% ( 7) 00:08:22.962 11241.945 - 11292.357: 97.3918% ( 8) 00:08:22.962 11292.357 - 11342.769: 97.4283% ( 6) 00:08:22.962 11342.769 - 11393.182: 97.4526% ( 4) 00:08:22.962 11393.182 - 11443.594: 97.4769% ( 4) 00:08:22.962 11443.594 - 11494.006: 97.5012% ( 4) 00:08:22.962 11494.006 - 11544.418: 97.5195% ( 3) 00:08:22.962 11544.418 - 11594.831: 97.5438% ( 4) 00:08:22.962 11594.831 - 11645.243: 97.5681% ( 4) 00:08:22.962 11645.243 - 11695.655: 97.5924% ( 4) 00:08:22.962 11695.655 - 11746.068: 97.6167% ( 4) 00:08:22.962 11746.068 - 11796.480: 97.6411% ( 4) 00:08:22.962 11796.480 - 11846.892: 97.6654% ( 4) 00:08:22.962 12199.778 - 12250.191: 97.6775% ( 2) 00:08:22.962 12250.191 - 12300.603: 97.7018% ( 4) 00:08:22.962 12300.603 - 12351.015: 97.7262% ( 4) 00:08:22.962 12351.015 - 12401.428: 97.7505% ( 4) 00:08:22.962 12401.428 - 12451.840: 97.7626% ( 2) 00:08:22.962 12451.840 - 12502.252: 97.7870% ( 4) 00:08:22.962 12502.252 - 12552.665: 97.8052% ( 3) 00:08:22.962 12552.665 - 12603.077: 97.8356% ( 5) 00:08:22.962 12603.077 - 12653.489: 97.8721% ( 6) 00:08:22.962 12653.489 - 12703.902: 97.9086% ( 6) 00:08:22.962 12703.902 - 12754.314: 97.9572% ( 8) 00:08:22.962 12754.314 - 12804.726: 97.9998% ( 7) 00:08:22.962 12804.726 - 12855.138: 98.0423% ( 7) 00:08:22.962 12855.138 - 12905.551: 98.1031% ( 10) 00:08:22.962 12905.551 - 13006.375: 98.2490% ( 24) 00:08:22.962 13006.375 - 13107.200: 98.4010% ( 25) 00:08:22.962 13107.200 - 13208.025: 98.5348% ( 22) 00:08:22.962 13208.025 - 13308.849: 98.6442% ( 18) 00:08:22.962 13308.849 - 13409.674: 98.7536% ( 18) 00:08:22.962 13409.674 - 13510.498: 98.8509% ( 16) 00:08:22.962 13510.498 - 13611.323: 98.9178% ( 11) 00:08:22.962 13611.323 - 13712.148: 98.9786% ( 10) 00:08:22.962 13712.148 - 13812.972: 99.0394% ( 10) 00:08:22.962 13812.972 - 13913.797: 99.0941% ( 9) 00:08:22.962 13913.797 - 14014.622: 99.1184% ( 4) 00:08:22.962 14014.622 - 14115.446: 99.1428% ( 4) 00:08:22.962 
14115.446 - 14216.271: 99.1671% ( 4) 00:08:22.962 14216.271 - 14317.095: 99.1914% ( 4) 00:08:22.962 14317.095 - 14417.920: 99.2157% ( 4) 00:08:22.962 14417.920 - 14518.745: 99.2218% ( 1) 00:08:22.962 16636.062 - 16736.886: 99.2279% ( 1) 00:08:22.962 16736.886 - 16837.711: 99.2522% ( 4) 00:08:22.962 16837.711 - 16938.535: 99.2704% ( 3) 00:08:22.962 16938.535 - 17039.360: 99.2947% ( 4) 00:08:22.962 17039.360 - 17140.185: 99.3191% ( 4) 00:08:22.962 17140.185 - 17241.009: 99.3434% ( 4) 00:08:22.962 17241.009 - 17341.834: 99.3677% ( 4) 00:08:22.962 17341.834 - 17442.658: 99.3920% ( 4) 00:08:22.962 17442.658 - 17543.483: 99.4163% ( 4) 00:08:22.962 17543.483 - 17644.308: 99.4346% ( 3) 00:08:22.962 17644.308 - 17745.132: 99.4528% ( 3) 00:08:22.962 17745.132 - 17845.957: 99.4771% ( 4) 00:08:22.962 17845.957 - 17946.782: 99.5015% ( 4) 00:08:22.962 17946.782 - 18047.606: 99.5197% ( 3) 00:08:22.962 18047.606 - 18148.431: 99.5440% ( 4) 00:08:22.962 18148.431 - 18249.255: 99.5683% ( 4) 00:08:22.962 18249.255 - 18350.080: 99.5866% ( 3) 00:08:22.962 18350.080 - 18450.905: 99.6109% ( 4) 00:08:22.962 24903.680 - 25004.505: 99.6352% ( 4) 00:08:22.962 25004.505 - 25105.329: 99.6595% ( 4) 00:08:22.962 25105.329 - 25206.154: 99.6778% ( 3) 00:08:22.962 25206.154 - 25306.978: 99.7021% ( 4) 00:08:22.962 25306.978 - 25407.803: 99.7264% ( 4) 00:08:22.962 25407.803 - 25508.628: 99.7507% ( 4) 00:08:22.962 25508.628 - 25609.452: 99.7690% ( 3) 00:08:22.962 25609.452 - 25710.277: 99.7933% ( 4) 00:08:22.962 25710.277 - 25811.102: 99.8176% ( 4) 00:08:22.962 25811.102 - 26012.751: 99.8662% ( 8) 00:08:22.962 26012.751 - 26214.400: 99.9088% ( 7) 00:08:22.962 26214.400 - 26416.049: 99.9574% ( 8) 00:08:22.962 26416.049 - 26617.698: 100.0000% ( 7) 00:08:22.962 00:08:22.962 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:22.962 ============================================================================== 00:08:22.962 Range in us Cumulative IO count 00:08:22.962 4007.778 - 4032.985: 0.0182% ( 3) 00:08:22.962 4032.985 - 4058.191: 0.0304% ( 2) 00:08:22.962 4058.191 - 4083.397: 0.0486% ( 3) 00:08:22.962 4083.397 - 4108.603: 0.0608% ( 2) 00:08:22.962 4108.603 - 4133.809: 0.0730% ( 2) 00:08:22.962 4133.809 - 4159.015: 0.0851% ( 2) 00:08:22.962 4159.015 - 4184.222: 0.0973% ( 2) 00:08:22.962 4184.222 - 4209.428: 0.1155% ( 3) 00:08:22.962 4209.428 - 4234.634: 0.1277% ( 2) 00:08:22.962 4234.634 - 4259.840: 0.1459% ( 3) 00:08:22.962 4259.840 - 4285.046: 0.1581% ( 2) 00:08:22.962 4285.046 - 4310.252: 0.1702% ( 2) 00:08:22.962 4310.252 - 4335.458: 0.1885% ( 3) 00:08:22.962 4335.458 - 4360.665: 0.2006% ( 2) 00:08:22.962 4360.665 - 4385.871: 0.2128% ( 2) 00:08:22.962 4385.871 - 4411.077: 0.2310% ( 3) 00:08:22.962 4411.077 - 4436.283: 0.2432% ( 2) 00:08:22.962 4436.283 - 4461.489: 0.2614% ( 3) 00:08:22.962 4461.489 - 4486.695: 0.2736% ( 2) 00:08:22.962 4486.695 - 4511.902: 0.2857% ( 2) 00:08:22.962 4511.902 - 4537.108: 0.3040% ( 3) 00:08:22.962 4537.108 - 4562.314: 0.3161% ( 2) 00:08:22.962 4562.314 - 4587.520: 0.3344% ( 3) 00:08:22.962 4587.520 - 4612.726: 0.3465% ( 2) 00:08:22.962 4612.726 - 4637.932: 0.3648% ( 3) 00:08:22.962 4637.932 - 4663.138: 0.3769% ( 2) 00:08:22.962 4663.138 - 4688.345: 0.3891% ( 2) 00:08:22.962 6225.920 - 6251.126: 0.4195% ( 5) 00:08:22.962 6251.126 - 6276.332: 0.4438% ( 4) 00:08:22.962 6276.332 - 6301.538: 0.4681% ( 4) 00:08:22.962 6301.538 - 6326.745: 0.5107% ( 7) 00:08:22.962 6326.745 - 6351.951: 0.5593% ( 8) 00:08:22.962 6351.951 - 6377.157: 0.6080% ( 8) 00:08:22.962 6377.157 - 6402.363: 
00:08:22.962  [per-bucket latency rows continuing the preceding histogram: buckets from 6402.363us through 26819.348us, cumulative 0.6688% to 100.0000%]
00:08:22.963  
00:08:22.963  Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:22.963  ==============================================================================
00:08:22.963         Range in us     Cumulative    IO count
00:08:22.964  [per-bucket latency rows: buckets from 3780.923us through 27020.997us, cumulative 0.0122% to 100.0000%]
00:08:22.965  
00:08:22.965  Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:22.965  ==============================================================================
00:08:22.965         Range in us     Cumulative    IO count
00:08:22.966  [per-bucket latency rows: buckets from 3554.068us through 27020.997us, cumulative 0.0182% to 100.0000%]
00:08:22.966  
00:08:22.966  Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:22.966  ==============================================================================
00:08:22.966         Range in us     Cumulative    IO count
00:08:22.967  [per-bucket latency rows: buckets from 3302.006us through 27020.997us, cumulative 0.0061% to 100.0000%]
00:08:22.967  
00:08:22.967  22:04:29 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:23.913  Initializing NVMe Controllers
00:08:23.913  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:23.913  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:23.913  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:23.913  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:23.913  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:23.913  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:23.913  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:23.913  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:23.913  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:23.913  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:23.913  Initialization complete. Launching workers.
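For anyone replaying this step outside the CI pipeline, a minimal Python sketch of the same spdk_nvme_perf invocation follows. The binary path and flags are copied verbatim from the command recorded above; the per-flag interpretations in the comments are inferred from the invocation and the output it produced, and should be treated as assumptions rather than verified SPDK documentation.

    import subprocess

    # Replay of the perf invocation recorded in this log. The path is taken
    # verbatim from the log line above; flag meanings below are assumptions.
    PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"

    cmd = [
        PERF,
        "-q", "128",    # assumed: queue depth per namespace
        "-w", "write",  # assumed: write workload
        "-o", "12288",  # assumed: I/O size in bytes (12 KiB)
        "-t", "1",      # assumed: run time in seconds
        "-LL",          # assumed: enables the per-bucket latency histograms seen below
        "-i", "0",      # assumed: shared-memory group ID
    ]

    # Prints the same kind of summary tables and histograms this log captured.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)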
00:08:23.914  ========================================================
00:08:23.914                                                                           Latency(us)
00:08:23.914  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:23.914  PCIE (0000:00:10.0) NSID 1 from core 0:   15929.45     186.67    8037.36    5213.68   25255.55
00:08:23.914  PCIE (0000:00:11.0) NSID 1 from core 0:   15929.45     186.67    8030.67    4937.27   24662.43
00:08:23.914  PCIE (0000:00:13.0) NSID 1 from core 0:   15929.45     186.67    8024.27    4336.22   24269.01
00:08:23.914  PCIE (0000:00:12.0) NSID 1 from core 0:   15929.45     186.67    8017.72    3939.06   23968.70
00:08:23.914  PCIE (0000:00:12.0) NSID 2 from core 0:   15929.45     186.67    8010.83    3592.01   23771.91
00:08:23.914  PCIE (0000:00:12.0) NSID 3 from core 0:   15929.45     186.67    8004.05    3321.33   23503.94
00:08:23.914  ========================================================
00:08:23.914  Total                                  :   95576.72    1120.04    8020.82    3321.33   25255.55
00:08:23.914  
00:08:23.914  Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:23.914  =================================================================================
00:08:23.914     1.00000% :  6856.074us
00:08:23.914    10.00000% :  7158.548us
00:08:23.914    25.00000% :  7410.609us
00:08:23.914    50.00000% :  7713.083us
00:08:23.914    75.00000% :  8116.382us
00:08:23.914    90.00000% :  9074.215us
00:08:23.914    95.00000% : 10132.874us
00:08:23.914    98.00000% : 12703.902us
00:08:23.914    99.00000% : 14014.622us
00:08:23.914    99.50000% : 18551.729us
00:08:23.914    99.90000% : 24903.680us
00:08:23.914    99.99000% : 25306.978us
00:08:23.914    99.99900% : 25306.978us
00:08:23.914    99.99990% : 25306.978us
00:08:23.914    99.99999% : 25306.978us
00:08:23.914  
00:08:23.914  Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:23.914  =================================================================================
00:08:23.914     1.00000% :  7007.311us
00:08:23.914    10.00000% :  7309.785us
00:08:23.914    25.00000% :  7461.022us
00:08:23.914    50.00000% :  7662.671us
00:08:23.914    75.00000% :  8015.557us
00:08:23.914    90.00000% :  9074.215us
00:08:23.914    95.00000% :  9880.812us
00:08:23.914    98.00000% : 12855.138us
00:08:23.914    99.00000% : 14115.446us
00:08:23.914    99.50000% : 18450.905us
00:08:23.914    99.90000% : 24399.557us
00:08:23.914    99.99000% : 24702.031us
00:08:23.914    99.99900% : 24702.031us
00:08:23.914    99.99990% : 24702.031us
00:08:23.914    99.99999% : 24702.031us
00:08:23.914  
00:08:23.914  Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:23.914  =================================================================================
00:08:23.914     1.00000% :  6956.898us
00:08:23.914    10.00000% :  7259.372us
00:08:23.914    25.00000% :  7461.022us
00:08:23.914    50.00000% :  7662.671us
00:08:23.914    75.00000% :  8015.557us
00:08:23.914    90.00000% :  9023.803us
00:08:23.914    95.00000% : 10132.874us
00:08:23.914    98.00000% : 13107.200us
00:08:23.914    99.00000% : 14115.446us
00:08:23.914    99.50000% : 18854.203us
00:08:23.914    99.90000% : 23996.258us
00:08:23.914    99.99000% : 24298.732us
00:08:23.914    99.99900% : 24298.732us
00:08:23.914    99.99990% : 24298.732us
00:08:23.914    99.99999% : 24298.732us
00:08:23.914  
00:08:23.914  Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:23.914  =================================================================================
00:08:23.914     1.00000% :  6906.486us
00:08:23.914    10.00000% :  7309.785us
00:08:23.914    25.00000% :  7461.022us
00:08:23.914    50.00000% :  7662.671us
00:08:23.914    75.00000% :  8015.557us
00:08:23.914    90.00000% :  8922.978us
00:08:23.914    95.00000% : 10233.698us
00:08:23.914    98.00000% : 12905.551us
00:08:23.914    99.00000% : 14417.920us
00:08:23.914    99.50000% : 18551.729us
00:08:23.914    99.90000% : 23592.960us
00:08:23.914    99.99000% : 23996.258us
00:08:23.914    99.99900% : 23996.258us
00:08:23.914    99.99990% : 23996.258us
00:08:23.914    99.99999% : 23996.258us
00:08:23.914  
00:08:23.914  Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:23.914  =================================================================================
00:08:23.914     1.00000% :  6906.486us
00:08:23.914    10.00000% :  7259.372us
00:08:23.914    25.00000% :  7461.022us
00:08:23.914    50.00000% :  7662.671us
00:08:23.914    75.00000% :  8015.557us
00:08:23.914    90.00000% :  8973.391us
00:08:23.914    95.00000% : 10233.698us
00:08:23.914    98.00000% : 12552.665us
00:08:23.914    99.00000% : 14317.095us
00:08:23.914    99.50000% : 18249.255us
00:08:23.914    99.90000% : 23391.311us
00:08:23.914    99.99000% : 23794.609us
00:08:23.914    99.99900% : 23794.609us
00:08:23.914    99.99990% : 23794.609us
00:08:23.914    99.99999% : 23794.609us
00:08:23.914  
00:08:23.914  Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:23.914  =================================================================================
00:08:23.914     1.00000% :  6906.486us
00:08:23.914    10.00000% :  7309.785us
00:08:23.914    25.00000% :  7461.022us
00:08:23.914    50.00000% :  7662.671us
00:08:23.914    75.00000% :  8015.557us
00:08:23.914    90.00000% :  9074.215us
00:08:23.914    95.00000% : 10082.462us
00:08:23.914    98.00000% : 12754.314us
00:08:23.914    99.00000% : 14518.745us
00:08:23.914    99.50000% : 17946.782us
00:08:23.914    99.90000% : 23189.662us
00:08:23.914    99.99000% : 23492.135us
00:08:23.914    99.99900% : 23592.960us
00:08:23.914    99.99990% : 23592.960us
00:08:23.914    99.99999% : 23592.960us
00:08:23.914  
00:08:23.914  Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:23.914  ==============================================================================
00:08:23.914         Range in us     Cumulative    IO count
00:08:23.915  [per-bucket latency rows: buckets from 5192.468us through 25306.978us, cumulative 0.0188% to 100.0000%]
00:08:23.915  
00:08:23.915  Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:23.915  ==============================================================================
00:08:23.915         Range in us     Cumulative    IO count
00:08:23.916  [per-bucket latency rows: buckets from 4915.200us through 24702.031us, cumulative 0.0063% to 100.0000%]
00:08:23.916  
00:08:23.916  Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:23.916  ==============================================================================
00:08:23.917         Range in us     Cumulative    IO count
00:08:23.917  [per-bucket latency rows: buckets from 4335.458us onward; the captured log breaks off mid-histogram at the 11947.717us bucket]
97.3456% ( 5) 00:08:23.917 11947.717 - 11998.129: 97.3707% ( 4) 00:08:23.917 11998.129 - 12048.542: 97.3958% ( 4) 00:08:23.917 12048.542 - 12098.954: 97.4272% ( 5) 00:08:23.917 12098.954 - 12149.366: 97.4586% ( 5) 00:08:23.917 12149.366 - 12199.778: 97.4837% ( 4) 00:08:23.917 12199.778 - 12250.191: 97.5213% ( 6) 00:08:23.917 12250.191 - 12300.603: 97.5527% ( 5) 00:08:23.917 12300.603 - 12351.015: 97.5778% ( 4) 00:08:23.917 12351.015 - 12401.428: 97.5904% ( 2) 00:08:23.917 12451.840 - 12502.252: 97.5966% ( 1) 00:08:23.917 12502.252 - 12552.665: 97.6406% ( 7) 00:08:23.917 12552.665 - 12603.077: 97.6782% ( 6) 00:08:23.917 12603.077 - 12653.489: 97.7159% ( 6) 00:08:23.917 12653.489 - 12703.902: 97.7347% ( 3) 00:08:23.917 12703.902 - 12754.314: 97.7598% ( 4) 00:08:23.917 12754.314 - 12804.726: 97.7723% ( 2) 00:08:23.917 12804.726 - 12855.138: 97.8037% ( 5) 00:08:23.917 12855.138 - 12905.551: 97.8539% ( 8) 00:08:23.917 12905.551 - 13006.375: 97.9606% ( 17) 00:08:23.917 13006.375 - 13107.200: 98.0422% ( 13) 00:08:23.917 13107.200 - 13208.025: 98.1237% ( 13) 00:08:23.917 13208.025 - 13308.849: 98.2053% ( 13) 00:08:23.917 13308.849 - 13409.674: 98.2806% ( 12) 00:08:23.917 13409.674 - 13510.498: 98.3936% ( 18) 00:08:23.917 13510.498 - 13611.323: 98.5191% ( 20) 00:08:23.917 13611.323 - 13712.148: 98.6320% ( 18) 00:08:23.917 13712.148 - 13812.972: 98.7262% ( 15) 00:08:23.917 13812.972 - 13913.797: 98.7764% ( 8) 00:08:23.917 13913.797 - 14014.622: 98.8705% ( 15) 00:08:23.917 14014.622 - 14115.446: 99.0399% ( 27) 00:08:23.917 14115.446 - 14216.271: 99.1089% ( 11) 00:08:23.917 14216.271 - 14317.095: 99.1717% ( 10) 00:08:23.917 14317.095 - 14417.920: 99.1968% ( 4) 00:08:23.917 17845.957 - 17946.782: 99.2282% ( 5) 00:08:23.917 17946.782 - 18047.606: 99.2658% ( 6) 00:08:23.917 18047.606 - 18148.431: 99.2972% ( 5) 00:08:23.917 18148.431 - 18249.255: 99.3286% ( 5) 00:08:23.917 18249.255 - 18350.080: 99.3662% ( 6) 00:08:23.917 18350.080 - 18450.905: 99.4039% ( 6) 00:08:23.917 18450.905 - 18551.729: 99.4415% ( 6) 00:08:23.917 18551.729 - 18652.554: 99.4666% ( 4) 00:08:23.917 18652.554 - 18753.378: 99.4854% ( 3) 00:08:23.917 18753.378 - 18854.203: 99.5168% ( 5) 00:08:23.917 18854.203 - 18955.028: 99.5482% ( 5) 00:08:23.917 18955.028 - 19055.852: 99.5733% ( 4) 00:08:23.917 19055.852 - 19156.677: 99.5984% ( 4) 00:08:23.917 22887.188 - 22988.012: 99.6109% ( 2) 00:08:23.917 22988.012 - 23088.837: 99.6235% ( 2) 00:08:23.917 23088.837 - 23189.662: 99.6360% ( 2) 00:08:23.917 23189.662 - 23290.486: 99.6674% ( 5) 00:08:23.917 23290.486 - 23391.311: 99.7239% ( 9) 00:08:23.917 23492.135 - 23592.960: 99.7553% ( 5) 00:08:23.917 23592.960 - 23693.785: 99.7992% ( 7) 00:08:23.917 23693.785 - 23794.609: 99.8431% ( 7) 00:08:23.917 23794.609 - 23895.434: 99.8745% ( 5) 00:08:23.917 23895.434 - 23996.258: 99.9121% ( 6) 00:08:23.917 23996.258 - 24097.083: 99.9498% ( 6) 00:08:23.917 24097.083 - 24197.908: 99.9812% ( 5) 00:08:23.917 24197.908 - 24298.732: 100.0000% ( 3) 00:08:23.917 00:08:23.917 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:23.917 ============================================================================== 00:08:23.917 Range in us Cumulative IO count 00:08:23.917 3932.160 - 3957.366: 0.0251% ( 4) 00:08:23.917 3957.366 - 3982.572: 0.0502% ( 4) 00:08:23.917 3982.572 - 4007.778: 0.0816% ( 5) 00:08:23.917 4007.778 - 4032.985: 0.1506% ( 11) 00:08:23.917 4032.985 - 4058.191: 0.2259% ( 12) 00:08:23.918 4058.191 - 4083.397: 0.2385% ( 2) 00:08:23.918 4083.397 - 4108.603: 0.2510% ( 2) 00:08:23.918 
4108.603 - 4133.809: 0.2636% ( 2) 00:08:23.918 4133.809 - 4159.015: 0.2761% ( 2) 00:08:23.918 4159.015 - 4184.222: 0.2887% ( 2) 00:08:23.918 4184.222 - 4209.428: 0.3012% ( 2) 00:08:23.918 4209.428 - 4234.634: 0.3138% ( 2) 00:08:23.918 4234.634 - 4259.840: 0.3263% ( 2) 00:08:23.918 4259.840 - 4285.046: 0.3389% ( 2) 00:08:23.918 4285.046 - 4310.252: 0.3514% ( 2) 00:08:23.918 4310.252 - 4335.458: 0.3640% ( 2) 00:08:23.918 4335.458 - 4360.665: 0.3765% ( 2) 00:08:23.918 4360.665 - 4385.871: 0.3891% ( 2) 00:08:23.918 4385.871 - 4411.077: 0.4016% ( 2) 00:08:23.918 6175.508 - 6200.714: 0.4204% ( 3) 00:08:23.918 6200.714 - 6225.920: 0.4393% ( 3) 00:08:23.918 6225.920 - 6251.126: 0.4644% ( 4) 00:08:23.918 6251.126 - 6276.332: 0.4706% ( 1) 00:08:23.918 6276.332 - 6301.538: 0.5208% ( 8) 00:08:23.918 6301.538 - 6326.745: 0.5710% ( 8) 00:08:23.918 6326.745 - 6351.951: 0.6087% ( 6) 00:08:23.918 6351.951 - 6377.157: 0.6275% ( 3) 00:08:23.918 6377.157 - 6402.363: 0.6401% ( 2) 00:08:23.918 6402.363 - 6427.569: 0.6526% ( 2) 00:08:23.918 6427.569 - 6452.775: 0.6714% ( 3) 00:08:23.918 6452.775 - 6503.188: 0.6965% ( 4) 00:08:23.918 6503.188 - 6553.600: 0.7216% ( 4) 00:08:23.918 6553.600 - 6604.012: 0.7530% ( 5) 00:08:23.918 6604.012 - 6654.425: 0.7781% ( 4) 00:08:23.918 6654.425 - 6704.837: 0.8032% ( 4) 00:08:23.918 6755.249 - 6805.662: 0.8095% ( 1) 00:08:23.918 6805.662 - 6856.074: 0.8283% ( 3) 00:08:23.918 6856.074 - 6906.486: 1.0166% ( 30) 00:08:23.918 6906.486 - 6956.898: 1.3366% ( 51) 00:08:23.918 6956.898 - 7007.311: 1.7570% ( 67) 00:08:23.918 7007.311 - 7057.723: 2.4285% ( 107) 00:08:23.918 7057.723 - 7108.135: 3.5392% ( 177) 00:08:23.918 7108.135 - 7158.548: 5.0515% ( 241) 00:08:23.918 7158.548 - 7208.960: 7.0156% ( 313) 00:08:23.918 7208.960 - 7259.372: 9.7013% ( 428) 00:08:23.918 7259.372 - 7309.785: 13.4036% ( 590) 00:08:23.918 7309.785 - 7360.197: 17.9531% ( 725) 00:08:23.918 7360.197 - 7410.609: 22.8727% ( 784) 00:08:23.918 7410.609 - 7461.022: 28.6458% ( 920) 00:08:23.918 7461.022 - 7511.434: 34.9711% ( 1008) 00:08:23.918 7511.434 - 7561.846: 41.1960% ( 992) 00:08:23.918 7561.846 - 7612.258: 46.7056% ( 878) 00:08:23.918 7612.258 - 7662.671: 51.9892% ( 842) 00:08:23.918 7662.671 - 7713.083: 56.5638% ( 729) 00:08:23.918 7713.083 - 7763.495: 61.0505% ( 715) 00:08:23.918 7763.495 - 7813.908: 64.8218% ( 601) 00:08:23.918 7813.908 - 7864.320: 68.1099% ( 524) 00:08:23.918 7864.320 - 7914.732: 71.0467% ( 468) 00:08:23.918 7914.732 - 7965.145: 73.8266% ( 443) 00:08:23.918 7965.145 - 8015.557: 75.9350% ( 336) 00:08:23.918 8015.557 - 8065.969: 77.7610% ( 291) 00:08:23.918 8065.969 - 8116.382: 79.2859% ( 243) 00:08:23.918 8116.382 - 8166.794: 80.4468% ( 185) 00:08:23.918 8166.794 - 8217.206: 81.7332% ( 205) 00:08:23.918 8217.206 - 8267.618: 82.7560% ( 163) 00:08:23.918 8267.618 - 8318.031: 83.6659% ( 145) 00:08:23.918 8318.031 - 8368.443: 84.4189% ( 120) 00:08:23.918 8368.443 - 8418.855: 85.1029% ( 109) 00:08:23.918 8418.855 - 8469.268: 85.7681% ( 106) 00:08:23.918 8469.268 - 8519.680: 86.1885% ( 67) 00:08:23.918 8519.680 - 8570.092: 86.7031% ( 82) 00:08:23.918 8570.092 - 8620.505: 87.2113% ( 81) 00:08:23.918 8620.505 - 8670.917: 87.7196% ( 81) 00:08:23.918 8670.917 - 8721.329: 88.3095% ( 94) 00:08:23.918 8721.329 - 8771.742: 88.8366% ( 84) 00:08:23.918 8771.742 - 8822.154: 89.2445% ( 65) 00:08:23.918 8822.154 - 8872.566: 89.7465% ( 80) 00:08:23.918 8872.566 - 8922.978: 90.2422% ( 79) 00:08:23.918 8922.978 - 8973.391: 90.5685% ( 52) 00:08:23.918 8973.391 - 9023.803: 90.8886% ( 51) 00:08:23.918 9023.803 - 
9074.215: 91.0894% ( 32) 00:08:23.918 9074.215 - 9124.628: 91.3153% ( 36) 00:08:23.918 9124.628 - 9175.040: 91.5725% ( 41) 00:08:23.918 9175.040 - 9225.452: 91.8235% ( 40) 00:08:23.918 9225.452 - 9275.865: 92.0369% ( 34) 00:08:23.918 9275.865 - 9326.277: 92.2440% ( 33) 00:08:23.918 9326.277 - 9376.689: 92.6707% ( 68) 00:08:23.918 9376.689 - 9427.102: 92.9656% ( 47) 00:08:23.918 9427.102 - 9477.514: 93.2543% ( 46) 00:08:23.918 9477.514 - 9527.926: 93.3860% ( 21) 00:08:23.918 9527.926 - 9578.338: 93.5115% ( 20) 00:08:23.918 9578.338 - 9628.751: 93.6308% ( 19) 00:08:23.918 9628.751 - 9679.163: 93.7500% ( 19) 00:08:23.918 9679.163 - 9729.575: 93.8567% ( 17) 00:08:23.918 9729.575 - 9779.988: 93.9822% ( 20) 00:08:23.918 9779.988 - 9830.400: 94.1767% ( 31) 00:08:23.918 9830.400 - 9880.812: 94.3650% ( 30) 00:08:23.918 9880.812 - 9931.225: 94.4905% ( 20) 00:08:23.918 9931.225 - 9981.637: 94.5532% ( 10) 00:08:23.918 9981.637 - 10032.049: 94.6411% ( 14) 00:08:23.918 10032.049 - 10082.462: 94.7289% ( 14) 00:08:23.918 10082.462 - 10132.874: 94.8607% ( 21) 00:08:23.918 10132.874 - 10183.286: 94.9674% ( 17) 00:08:23.918 10183.286 - 10233.698: 95.0678% ( 16) 00:08:23.918 10233.698 - 10284.111: 95.2058% ( 22) 00:08:23.918 10284.111 - 10334.523: 95.3376% ( 21) 00:08:23.918 10334.523 - 10384.935: 95.4568% ( 19) 00:08:23.918 10384.935 - 10435.348: 95.5635% ( 17) 00:08:23.918 10435.348 - 10485.760: 95.7706% ( 33) 00:08:23.918 10485.760 - 10536.172: 95.8584% ( 14) 00:08:23.918 10536.172 - 10586.585: 95.9337% ( 12) 00:08:23.918 10586.585 - 10636.997: 96.0090% ( 12) 00:08:23.918 10636.997 - 10687.409: 96.0906% ( 13) 00:08:23.918 10687.409 - 10737.822: 96.1596% ( 11) 00:08:23.918 10737.822 - 10788.234: 96.2977% ( 22) 00:08:23.918 10788.234 - 10838.646: 96.3981% ( 16) 00:08:23.918 10838.646 - 10889.058: 96.5173% ( 19) 00:08:23.918 10889.058 - 10939.471: 96.6052% ( 14) 00:08:23.918 10939.471 - 10989.883: 96.6679% ( 10) 00:08:23.918 10989.883 - 11040.295: 96.7307% ( 10) 00:08:23.918 11040.295 - 11090.708: 96.7871% ( 9) 00:08:23.918 11090.708 - 11141.120: 96.8562% ( 11) 00:08:23.918 11141.120 - 11191.532: 96.9252% ( 11) 00:08:23.918 11191.532 - 11241.945: 96.9629% ( 6) 00:08:23.918 11241.945 - 11292.357: 96.9942% ( 5) 00:08:23.918 11292.357 - 11342.769: 97.0382% ( 7) 00:08:23.918 11342.769 - 11393.182: 97.0695% ( 5) 00:08:23.918 11393.182 - 11443.594: 97.0946% ( 4) 00:08:23.918 11443.594 - 11494.006: 97.1072% ( 2) 00:08:23.918 11494.006 - 11544.418: 97.1197% ( 2) 00:08:23.918 11544.418 - 11594.831: 97.1386% ( 3) 00:08:23.918 11594.831 - 11645.243: 97.1888% ( 8) 00:08:23.918 11645.243 - 11695.655: 97.2578% ( 11) 00:08:23.918 11695.655 - 11746.068: 97.3645% ( 17) 00:08:23.918 11746.068 - 11796.480: 97.4209% ( 9) 00:08:23.918 11796.480 - 11846.892: 97.4523% ( 5) 00:08:23.918 11846.892 - 11897.305: 97.4774% ( 4) 00:08:23.918 11897.305 - 11947.717: 97.5088% ( 5) 00:08:23.918 11947.717 - 11998.129: 97.5464% ( 6) 00:08:23.918 11998.129 - 12048.542: 97.5778% ( 5) 00:08:23.918 12048.542 - 12098.954: 97.6092% ( 5) 00:08:23.918 12098.954 - 12149.366: 97.6468% ( 6) 00:08:23.918 12149.366 - 12199.778: 97.6782% ( 5) 00:08:23.918 12199.778 - 12250.191: 97.7159% ( 6) 00:08:23.918 12250.191 - 12300.603: 97.7472% ( 5) 00:08:23.918 12300.603 - 12351.015: 97.7786% ( 5) 00:08:23.918 12351.015 - 12401.428: 97.8100% ( 5) 00:08:23.918 12401.428 - 12451.840: 97.8288% ( 3) 00:08:23.918 12451.840 - 12502.252: 97.8414% ( 2) 00:08:23.918 12502.252 - 12552.665: 97.8539% ( 2) 00:08:23.918 12552.665 - 12603.077: 97.8602% ( 1) 00:08:23.918 12603.077 
- 12653.489: 97.8727% ( 2) 00:08:23.918 12653.489 - 12703.902: 97.8790% ( 1) 00:08:23.918 12703.902 - 12754.314: 97.8916% ( 2) 00:08:23.918 12754.314 - 12804.726: 97.9229% ( 5) 00:08:23.918 12804.726 - 12855.138: 97.9606% ( 6) 00:08:23.918 12855.138 - 12905.551: 98.0045% ( 7) 00:08:23.918 12905.551 - 13006.375: 98.1049% ( 16) 00:08:23.918 13006.375 - 13107.200: 98.1802% ( 12) 00:08:23.918 13107.200 - 13208.025: 98.2618% ( 13) 00:08:23.918 13208.025 - 13308.849: 98.2932% ( 5) 00:08:23.918 13308.849 - 13409.674: 98.3245% ( 5) 00:08:23.918 13409.674 - 13510.498: 98.4877% ( 26) 00:08:23.918 13510.498 - 13611.323: 98.6446% ( 25) 00:08:23.918 13611.323 - 13712.148: 98.6760% ( 5) 00:08:23.918 13712.148 - 13812.972: 98.7073% ( 5) 00:08:23.918 13812.972 - 13913.797: 98.7575% ( 8) 00:08:23.918 13913.797 - 14014.622: 98.8203% ( 10) 00:08:23.918 14014.622 - 14115.446: 98.8579% ( 6) 00:08:23.918 14115.446 - 14216.271: 98.8956% ( 6) 00:08:23.919 14216.271 - 14317.095: 98.9709% ( 12) 00:08:23.919 14317.095 - 14417.920: 99.0399% ( 11) 00:08:23.919 14417.920 - 14518.745: 99.0776% ( 6) 00:08:23.919 14518.745 - 14619.569: 99.1215% ( 7) 00:08:23.919 14619.569 - 14720.394: 99.1591% ( 6) 00:08:23.919 14720.394 - 14821.218: 99.1968% ( 6) 00:08:23.919 17745.132 - 17845.957: 99.2407% ( 7) 00:08:23.919 17845.957 - 17946.782: 99.2909% ( 8) 00:08:23.919 17946.782 - 18047.606: 99.3348% ( 7) 00:08:23.919 18047.606 - 18148.431: 99.3662% ( 5) 00:08:23.919 18148.431 - 18249.255: 99.4039% ( 6) 00:08:23.919 18249.255 - 18350.080: 99.4478% ( 7) 00:08:23.919 18350.080 - 18450.905: 99.4792% ( 5) 00:08:23.919 18450.905 - 18551.729: 99.5105% ( 5) 00:08:23.919 18551.729 - 18652.554: 99.5294% ( 3) 00:08:23.919 18652.554 - 18753.378: 99.5545% ( 4) 00:08:23.919 18753.378 - 18854.203: 99.5796% ( 4) 00:08:23.919 18854.203 - 18955.028: 99.5984% ( 3) 00:08:23.919 22685.538 - 22786.363: 99.6109% ( 2) 00:08:23.919 22786.363 - 22887.188: 99.6235% ( 2) 00:08:23.919 22887.188 - 22988.012: 99.6423% ( 3) 00:08:23.919 22988.012 - 23088.837: 99.6925% ( 8) 00:08:23.919 23088.837 - 23189.662: 99.7929% ( 16) 00:08:23.919 23189.662 - 23290.486: 99.8180% ( 4) 00:08:23.919 23290.486 - 23391.311: 99.8557% ( 6) 00:08:23.919 23391.311 - 23492.135: 99.8808% ( 4) 00:08:23.919 23492.135 - 23592.960: 99.9059% ( 4) 00:08:23.919 23592.960 - 23693.785: 99.9310% ( 4) 00:08:23.919 23693.785 - 23794.609: 99.9561% ( 4) 00:08:23.919 23794.609 - 23895.434: 99.9812% ( 4) 00:08:23.919 23895.434 - 23996.258: 100.0000% ( 3) 00:08:23.919 00:08:23.919 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:23.919 ============================================================================== 00:08:23.919 Range in us Cumulative IO count 00:08:23.919 3579.274 - 3604.480: 0.0063% ( 1) 00:08:23.919 3604.480 - 3629.686: 0.0126% ( 1) 00:08:23.919 3629.686 - 3654.892: 0.0502% ( 6) 00:08:23.919 3654.892 - 3680.098: 0.0879% ( 6) 00:08:23.919 3680.098 - 3705.305: 0.1443% ( 9) 00:08:23.919 3705.305 - 3730.511: 0.2259% ( 13) 00:08:23.919 3730.511 - 3755.717: 0.2636% ( 6) 00:08:23.919 3755.717 - 3780.923: 0.2761% ( 2) 00:08:23.919 3780.923 - 3806.129: 0.2887% ( 2) 00:08:23.919 3806.129 - 3831.335: 0.3012% ( 2) 00:08:23.919 3831.335 - 3856.542: 0.3138% ( 2) 00:08:23.919 3856.542 - 3881.748: 0.3263% ( 2) 00:08:23.919 3881.748 - 3906.954: 0.3389% ( 2) 00:08:23.919 3906.954 - 3932.160: 0.3514% ( 2) 00:08:23.919 3932.160 - 3957.366: 0.3640% ( 2) 00:08:23.919 3957.366 - 3982.572: 0.3765% ( 2) 00:08:23.919 3982.572 - 4007.778: 0.3953% ( 3) 00:08:23.919 4007.778 - 4032.985: 0.4016% 
( 1) 00:08:23.919 5873.034 - 5898.240: 0.4079% ( 1) 00:08:23.919 5999.065 - 6024.271: 0.4142% ( 1) 00:08:23.919 6024.271 - 6049.477: 0.4518% ( 6) 00:08:23.919 6049.477 - 6074.683: 0.4769% ( 4) 00:08:23.919 6074.683 - 6099.889: 0.5083% ( 5) 00:08:23.919 6099.889 - 6125.095: 0.5773% ( 11) 00:08:23.919 6125.095 - 6150.302: 0.6087% ( 5) 00:08:23.919 6150.302 - 6175.508: 0.6275% ( 3) 00:08:23.919 6175.508 - 6200.714: 0.6463% ( 3) 00:08:23.919 6200.714 - 6225.920: 0.6589% ( 2) 00:08:23.919 6225.920 - 6251.126: 0.6714% ( 2) 00:08:23.919 6251.126 - 6276.332: 0.6903% ( 3) 00:08:23.919 6276.332 - 6301.538: 0.7028% ( 2) 00:08:23.919 6301.538 - 6326.745: 0.7154% ( 2) 00:08:23.919 6326.745 - 6351.951: 0.7279% ( 2) 00:08:23.919 6351.951 - 6377.157: 0.7467% ( 3) 00:08:23.919 6377.157 - 6402.363: 0.7593% ( 2) 00:08:23.919 6402.363 - 6427.569: 0.7656% ( 1) 00:08:23.919 6427.569 - 6452.775: 0.7844% ( 3) 00:08:23.919 6452.775 - 6503.188: 0.8032% ( 3) 00:08:23.919 6704.837 - 6755.249: 0.8095% ( 1) 00:08:23.919 6755.249 - 6805.662: 0.8283% ( 3) 00:08:23.919 6805.662 - 6856.074: 0.9036% ( 12) 00:08:23.919 6856.074 - 6906.486: 1.0668% ( 26) 00:08:23.919 6906.486 - 6956.898: 1.3303% ( 42) 00:08:23.919 6956.898 - 7007.311: 1.7445% ( 66) 00:08:23.919 7007.311 - 7057.723: 2.3971% ( 104) 00:08:23.919 7057.723 - 7108.135: 3.3760% ( 156) 00:08:23.919 7108.135 - 7158.548: 4.8946% ( 242) 00:08:23.919 7158.548 - 7208.960: 7.1160% ( 354) 00:08:23.919 7208.960 - 7259.372: 10.0966% ( 475) 00:08:23.919 7259.372 - 7309.785: 13.6421% ( 565) 00:08:23.919 7309.785 - 7360.197: 17.8966% ( 678) 00:08:23.919 7360.197 - 7410.609: 23.3873% ( 875) 00:08:23.919 7410.609 - 7461.022: 29.2796% ( 939) 00:08:23.919 7461.022 - 7511.434: 35.1531% ( 936) 00:08:23.919 7511.434 - 7561.846: 41.0643% ( 942) 00:08:23.919 7561.846 - 7612.258: 46.5801% ( 879) 00:08:23.919 7612.258 - 7662.671: 51.6566% ( 809) 00:08:23.919 7662.671 - 7713.083: 56.0743% ( 704) 00:08:23.919 7713.083 - 7763.495: 60.3916% ( 688) 00:08:23.919 7763.495 - 7813.908: 64.5143% ( 657) 00:08:23.919 7813.908 - 7864.320: 67.8840% ( 537) 00:08:23.919 7864.320 - 7914.732: 70.5133% ( 419) 00:08:23.919 7914.732 - 7965.145: 73.0735% ( 408) 00:08:23.919 7965.145 - 8015.557: 75.4016% ( 371) 00:08:23.919 8015.557 - 8065.969: 77.2151% ( 289) 00:08:23.919 8065.969 - 8116.382: 79.1165% ( 303) 00:08:23.919 8116.382 - 8166.794: 80.5848% ( 234) 00:08:23.919 8166.794 - 8217.206: 81.9716% ( 221) 00:08:23.919 8217.206 - 8267.618: 83.1827% ( 193) 00:08:23.919 8267.618 - 8318.031: 83.9232% ( 118) 00:08:23.919 8318.031 - 8368.443: 84.6323% ( 113) 00:08:23.919 8368.443 - 8418.855: 85.4292% ( 127) 00:08:23.919 8418.855 - 8469.268: 85.9752% ( 87) 00:08:23.919 8469.268 - 8519.680: 86.4772% ( 80) 00:08:23.919 8519.680 - 8570.092: 87.0482% ( 91) 00:08:23.919 8570.092 - 8620.505: 87.5126% ( 74) 00:08:23.919 8620.505 - 8670.917: 87.9832% ( 75) 00:08:23.919 8670.917 - 8721.329: 88.3848% ( 64) 00:08:23.919 8721.329 - 8771.742: 88.7425% ( 57) 00:08:23.919 8771.742 - 8822.154: 89.0060% ( 42) 00:08:23.919 8822.154 - 8872.566: 89.3135% ( 49) 00:08:23.919 8872.566 - 8922.978: 89.7653% ( 72) 00:08:23.919 8922.978 - 8973.391: 90.0728% ( 49) 00:08:23.919 8973.391 - 9023.803: 90.3301% ( 41) 00:08:23.919 9023.803 - 9074.215: 90.7254% ( 63) 00:08:23.919 9074.215 - 9124.628: 91.0894% ( 58) 00:08:23.919 9124.628 - 9175.040: 91.3780% ( 46) 00:08:23.919 9175.040 - 9225.452: 91.7294% ( 56) 00:08:23.919 9225.452 - 9275.865: 92.1059% ( 60) 00:08:23.919 9275.865 - 9326.277: 92.4009% ( 47) 00:08:23.919 9326.277 - 9376.689: 92.6456% ( 
39) 00:08:23.919 9376.689 - 9427.102: 92.9280% ( 45) 00:08:23.919 9427.102 - 9477.514: 93.0848% ( 25) 00:08:23.919 9477.514 - 9527.926: 93.3923% ( 49) 00:08:23.919 9527.926 - 9578.338: 93.5868% ( 31) 00:08:23.919 9578.338 - 9628.751: 93.8002% ( 34) 00:08:23.919 9628.751 - 9679.163: 93.9445% ( 23) 00:08:23.919 9679.163 - 9729.575: 94.0575% ( 18) 00:08:23.919 9729.575 - 9779.988: 94.1767% ( 19) 00:08:23.919 9779.988 - 9830.400: 94.2708% ( 15) 00:08:23.919 9830.400 - 9880.812: 94.3399% ( 11) 00:08:23.919 9880.812 - 9931.225: 94.4152% ( 12) 00:08:23.919 9931.225 - 9981.637: 94.4779% ( 10) 00:08:23.919 9981.637 - 10032.049: 94.5595% ( 13) 00:08:23.919 10032.049 - 10082.462: 94.6411% ( 13) 00:08:23.919 10082.462 - 10132.874: 94.7603% ( 19) 00:08:23.919 10132.874 - 10183.286: 94.9109% ( 24) 00:08:23.919 10183.286 - 10233.698: 95.0929% ( 29) 00:08:23.919 10233.698 - 10284.111: 95.3125% ( 35) 00:08:23.919 10284.111 - 10334.523: 95.4317% ( 19) 00:08:23.919 10334.523 - 10384.935: 95.5384% ( 17) 00:08:23.919 10384.935 - 10435.348: 95.7894% ( 40) 00:08:23.919 10435.348 - 10485.760: 95.8898% ( 16) 00:08:23.919 10485.760 - 10536.172: 95.9902% ( 16) 00:08:23.919 10536.172 - 10586.585: 96.0843% ( 15) 00:08:23.919 10586.585 - 10636.997: 96.1722% ( 14) 00:08:23.919 10636.997 - 10687.409: 96.2538% ( 13) 00:08:23.919 10687.409 - 10737.822: 96.3228% ( 11) 00:08:23.919 10737.822 - 10788.234: 96.3667% ( 7) 00:08:23.919 10788.234 - 10838.646: 96.4106% ( 7) 00:08:23.919 10838.646 - 10889.058: 96.4859% ( 12) 00:08:23.919 10889.058 - 10939.471: 96.5612% ( 12) 00:08:23.919 10939.471 - 10989.883: 96.6177% ( 9) 00:08:23.919 10989.883 - 11040.295: 96.6679% ( 8) 00:08:23.919 11040.295 - 11090.708: 96.7244% ( 9) 00:08:23.919 11090.708 - 11141.120: 96.7558% ( 5) 00:08:23.919 11141.120 - 11191.532: 96.7997% ( 7) 00:08:23.919 11191.532 - 11241.945: 96.8750% ( 12) 00:08:23.919 11241.945 - 11292.357: 96.9691% ( 15) 00:08:23.920 11292.357 - 11342.769: 97.0444% ( 12) 00:08:23.920 11342.769 - 11393.182: 97.0821% ( 6) 00:08:23.920 11393.182 - 11443.594: 97.1072% ( 4) 00:08:23.920 11443.594 - 11494.006: 97.1323% ( 4) 00:08:23.920 11494.006 - 11544.418: 97.1637% ( 5) 00:08:23.920 11544.418 - 11594.831: 97.1762% ( 2) 00:08:23.920 11594.831 - 11645.243: 97.2139% ( 6) 00:08:23.920 11645.243 - 11695.655: 97.2452% ( 5) 00:08:23.920 11695.655 - 11746.068: 97.2703% ( 4) 00:08:23.920 11746.068 - 11796.480: 97.3205% ( 8) 00:08:23.920 11796.480 - 11846.892: 97.4147% ( 15) 00:08:23.920 11846.892 - 11897.305: 97.5025% ( 14) 00:08:23.920 11897.305 - 11947.717: 97.5590% ( 9) 00:08:23.920 11947.717 - 11998.129: 97.6029% ( 7) 00:08:23.920 11998.129 - 12048.542: 97.6531% ( 8) 00:08:23.920 12048.542 - 12098.954: 97.7033% ( 8) 00:08:23.920 12098.954 - 12149.366: 97.7410% ( 6) 00:08:23.920 12149.366 - 12199.778: 97.7786% ( 6) 00:08:23.920 12199.778 - 12250.191: 97.8163% ( 6) 00:08:23.920 12250.191 - 12300.603: 97.8414% ( 4) 00:08:23.920 12300.603 - 12351.015: 97.8790% ( 6) 00:08:23.920 12351.015 - 12401.428: 97.9104% ( 5) 00:08:23.920 12401.428 - 12451.840: 97.9292% ( 3) 00:08:23.920 12451.840 - 12502.252: 97.9543% ( 4) 00:08:23.920 12502.252 - 12552.665: 98.0045% ( 8) 00:08:23.920 12552.665 - 12603.077: 98.0422% ( 6) 00:08:23.920 12603.077 - 12653.489: 98.0610% ( 3) 00:08:23.920 12653.489 - 12703.902: 98.0798% ( 3) 00:08:23.920 12703.902 - 12754.314: 98.0986% ( 3) 00:08:23.920 12754.314 - 12804.726: 98.1175% ( 3) 00:08:23.920 12804.726 - 12855.138: 98.1363% ( 3) 00:08:23.920 12855.138 - 12905.551: 98.1614% ( 4) 00:08:23.920 12905.551 - 13006.375: 
98.1990% ( 6) 00:08:23.920 13006.375 - 13107.200: 98.2806% ( 13) 00:08:23.920 13107.200 - 13208.025: 98.4249% ( 23) 00:08:23.920 13208.025 - 13308.849: 98.5630% ( 22) 00:08:23.920 13308.849 - 13409.674: 98.6195% ( 9) 00:08:23.920 13409.674 - 13510.498: 98.6822% ( 10) 00:08:23.920 13510.498 - 13611.323: 98.7136% ( 5) 00:08:23.920 13611.323 - 13712.148: 98.7450% ( 5) 00:08:23.920 13712.148 - 13812.972: 98.7575% ( 2) 00:08:23.920 13812.972 - 13913.797: 98.7826% ( 4) 00:08:23.920 13913.797 - 14014.622: 98.7952% ( 2) 00:08:23.920 14014.622 - 14115.446: 98.8454% ( 8) 00:08:23.920 14115.446 - 14216.271: 98.9144% ( 11) 00:08:23.920 14216.271 - 14317.095: 99.0776% ( 26) 00:08:23.920 14317.095 - 14417.920: 99.1152% ( 6) 00:08:23.920 14417.920 - 14518.745: 99.1591% ( 7) 00:08:23.920 14518.745 - 14619.569: 99.1968% ( 6) 00:08:23.920 17341.834 - 17442.658: 99.2282% ( 5) 00:08:23.920 17442.658 - 17543.483: 99.2658% ( 6) 00:08:23.920 17543.483 - 17644.308: 99.3097% ( 7) 00:08:23.920 17644.308 - 17745.132: 99.3537% ( 7) 00:08:23.920 17745.132 - 17845.957: 99.3850% ( 5) 00:08:23.920 17845.957 - 17946.782: 99.4290% ( 7) 00:08:23.920 17946.782 - 18047.606: 99.4603% ( 5) 00:08:23.920 18047.606 - 18148.431: 99.4854% ( 4) 00:08:23.920 18148.431 - 18249.255: 99.5105% ( 4) 00:08:23.920 18249.255 - 18350.080: 99.5356% ( 4) 00:08:23.920 18350.080 - 18450.905: 99.5607% ( 4) 00:08:23.920 18450.905 - 18551.729: 99.5858% ( 4) 00:08:23.920 18551.729 - 18652.554: 99.5984% ( 2) 00:08:23.920 22383.065 - 22483.889: 99.6235% ( 4) 00:08:23.920 22483.889 - 22584.714: 99.6423% ( 3) 00:08:23.920 22584.714 - 22685.538: 99.6611% ( 3) 00:08:23.920 22685.538 - 22786.363: 99.7051% ( 7) 00:08:23.920 22786.363 - 22887.188: 99.7866% ( 13) 00:08:23.920 22887.188 - 22988.012: 99.8055% ( 3) 00:08:23.920 22988.012 - 23088.837: 99.8306% ( 4) 00:08:23.920 23088.837 - 23189.662: 99.8557% ( 4) 00:08:23.920 23189.662 - 23290.486: 99.8808% ( 4) 00:08:23.920 23290.486 - 23391.311: 99.9059% ( 4) 00:08:23.920 23391.311 - 23492.135: 99.9310% ( 4) 00:08:23.920 23492.135 - 23592.960: 99.9561% ( 4) 00:08:23.920 23592.960 - 23693.785: 99.9749% ( 3) 00:08:23.920 23693.785 - 23794.609: 100.0000% ( 4) 00:08:23.920 00:08:23.920 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:23.920 ============================================================================== 00:08:23.920 Range in us Cumulative IO count 00:08:23.920 3302.006 - 3327.212: 0.0063% ( 1) 00:08:23.920 3327.212 - 3352.418: 0.0251% ( 3) 00:08:23.920 3352.418 - 3377.625: 0.0439% ( 3) 00:08:23.920 3377.625 - 3402.831: 0.1067% ( 10) 00:08:23.920 3402.831 - 3428.037: 0.1820% ( 12) 00:08:23.920 3428.037 - 3453.243: 0.2510% ( 11) 00:08:23.920 3453.243 - 3478.449: 0.2698% ( 3) 00:08:23.920 3478.449 - 3503.655: 0.2824% ( 2) 00:08:23.920 3503.655 - 3528.862: 0.2949% ( 2) 00:08:23.920 3528.862 - 3554.068: 0.3075% ( 2) 00:08:23.920 3554.068 - 3579.274: 0.3200% ( 2) 00:08:23.920 3579.274 - 3604.480: 0.3326% ( 2) 00:08:23.920 3604.480 - 3629.686: 0.3451% ( 2) 00:08:23.920 3629.686 - 3654.892: 0.3577% ( 2) 00:08:23.920 3654.892 - 3680.098: 0.3702% ( 2) 00:08:23.920 3680.098 - 3705.305: 0.3828% ( 2) 00:08:23.920 3705.305 - 3730.511: 0.3953% ( 2) 00:08:23.920 3730.511 - 3755.717: 0.4016% ( 1) 00:08:23.920 5671.385 - 5696.591: 0.4142% ( 2) 00:08:23.920 5696.591 - 5721.797: 0.4330% ( 3) 00:08:23.920 5721.797 - 5747.003: 0.4581% ( 4) 00:08:23.920 5747.003 - 5772.209: 0.5020% ( 7) 00:08:23.920 5772.209 - 5797.415: 0.5773% ( 12) 00:08:23.920 5797.415 - 5822.622: 0.6212% ( 7) 00:08:23.920 5822.622 - 
5847.828: 0.6338% ( 2) 00:08:23.920 5847.828 - 5873.034: 0.6526% ( 3) 00:08:23.920 5873.034 - 5898.240: 0.6652% ( 2) 00:08:23.920 5898.240 - 5923.446: 0.6777% ( 2) 00:08:23.920 5923.446 - 5948.652: 0.6903% ( 2) 00:08:23.920 5948.652 - 5973.858: 0.7091% ( 3) 00:08:23.920 5973.858 - 5999.065: 0.7216% ( 2) 00:08:23.920 5999.065 - 6024.271: 0.7342% ( 2) 00:08:23.920 6024.271 - 6049.477: 0.7530% ( 3) 00:08:23.920 6049.477 - 6074.683: 0.7656% ( 2) 00:08:23.920 6074.683 - 6099.889: 0.7781% ( 2) 00:08:23.920 6099.889 - 6125.095: 0.7907% ( 2) 00:08:23.920 6125.095 - 6150.302: 0.8032% ( 2) 00:08:23.920 6704.837 - 6755.249: 0.8095% ( 1) 00:08:23.920 6755.249 - 6805.662: 0.8534% ( 7) 00:08:23.920 6805.662 - 6856.074: 0.9287% ( 12) 00:08:23.920 6856.074 - 6906.486: 1.1295% ( 32) 00:08:23.920 6906.486 - 6956.898: 1.4182% ( 46) 00:08:23.920 6956.898 - 7007.311: 1.8261% ( 65) 00:08:23.920 7007.311 - 7057.723: 2.4787% ( 104) 00:08:23.920 7057.723 - 7108.135: 3.5015% ( 163) 00:08:23.920 7108.135 - 7158.548: 5.0201% ( 242) 00:08:23.920 7158.548 - 7208.960: 7.1536% ( 340) 00:08:23.920 7208.960 - 7259.372: 9.9774% ( 450) 00:08:23.920 7259.372 - 7309.785: 13.4224% ( 549) 00:08:23.920 7309.785 - 7360.197: 17.8025% ( 698) 00:08:23.920 7360.197 - 7410.609: 23.1237% ( 848) 00:08:23.920 7410.609 - 7461.022: 28.7337% ( 894) 00:08:23.920 7461.022 - 7511.434: 34.4629% ( 913) 00:08:23.920 7511.434 - 7561.846: 40.8635% ( 1020) 00:08:23.920 7561.846 - 7612.258: 46.5926% ( 913) 00:08:23.920 7612.258 - 7662.671: 51.6943% ( 813) 00:08:23.920 7662.671 - 7713.083: 56.6077% ( 783) 00:08:23.920 7713.083 - 7763.495: 61.0191% ( 703) 00:08:23.920 7763.495 - 7813.908: 65.1795% ( 663) 00:08:23.920 7813.908 - 7864.320: 68.3233% ( 501) 00:08:23.920 7864.320 - 7914.732: 71.3793% ( 487) 00:08:23.920 7914.732 - 7965.145: 74.0776% ( 430) 00:08:23.920 7965.145 - 8015.557: 76.2048% ( 339) 00:08:23.920 8015.557 - 8065.969: 78.0058% ( 287) 00:08:23.920 8065.969 - 8116.382: 79.8883% ( 300) 00:08:23.920 8116.382 - 8166.794: 81.1308% ( 198) 00:08:23.920 8166.794 - 8217.206: 82.0971% ( 154) 00:08:23.920 8217.206 - 8267.618: 82.9631% ( 138) 00:08:23.920 8267.618 - 8318.031: 83.7412% ( 124) 00:08:23.920 8318.031 - 8368.443: 84.4189% ( 108) 00:08:23.920 8368.443 - 8418.855: 85.1406% ( 115) 00:08:23.920 8418.855 - 8469.268: 85.7932% ( 104) 00:08:23.920 8469.268 - 8519.680: 86.4270% ( 101) 00:08:23.920 8519.680 - 8570.092: 87.0482% ( 99) 00:08:23.920 8570.092 - 8620.505: 87.6067% ( 89) 00:08:23.920 8620.505 - 8670.917: 87.8451% ( 38) 00:08:23.920 8670.917 - 8721.329: 88.1338% ( 46) 00:08:23.920 8721.329 - 8771.742: 88.4789% ( 55) 00:08:23.920 8771.742 - 8822.154: 88.7550% ( 44) 00:08:23.920 8822.154 - 8872.566: 89.0123% ( 41) 00:08:23.920 8872.566 - 8922.978: 89.2696% ( 41) 00:08:23.920 8922.978 - 8973.391: 89.5018% ( 37) 00:08:23.920 8973.391 - 9023.803: 89.8406% ( 54) 00:08:23.920 9023.803 - 9074.215: 90.2548% ( 66) 00:08:23.920 9074.215 - 9124.628: 90.7882% ( 85) 00:08:23.920 9124.628 - 9175.040: 91.1082% ( 51) 00:08:23.920 9175.040 - 9225.452: 91.4847% ( 60) 00:08:23.920 9225.452 - 9275.865: 91.9239% ( 70) 00:08:23.920 9275.865 - 9326.277: 92.3256% ( 64) 00:08:23.920 9326.277 - 9376.689: 92.8087% ( 77) 00:08:23.920 9376.689 - 9427.102: 93.0723% ( 42) 00:08:23.920 9427.102 - 9477.514: 93.5492% ( 76) 00:08:23.920 9477.514 - 9527.926: 93.8692% ( 51) 00:08:23.920 9527.926 - 9578.338: 94.0512% ( 29) 00:08:23.920 9578.338 - 9628.751: 94.2018% ( 24) 00:08:23.920 9628.751 - 9679.163: 94.3336% ( 21) 00:08:23.920 9679.163 - 9729.575: 94.4591% ( 20) 
00:08:23.920 9729.575 - 9779.988: 94.5344% ( 12) 00:08:23.920 9779.988 - 9830.400: 94.6097% ( 12) 00:08:24.182 9830.400 - 9880.812: 94.6724% ( 10) 00:08:24.182 9880.812 - 9931.225: 94.7352% ( 10) 00:08:24.182 9931.225 - 9981.637: 94.8356% ( 16) 00:08:24.182 9981.637 - 10032.049: 94.9674% ( 21) 00:08:24.182 10032.049 - 10082.462: 95.1117% ( 23) 00:08:24.182 10082.462 - 10132.874: 95.2246% ( 18) 00:08:24.182 10132.874 - 10183.286: 95.3251% ( 16) 00:08:24.182 10183.286 - 10233.698: 95.4694% ( 23) 00:08:24.182 10233.698 - 10284.111: 95.6263% ( 25) 00:08:24.182 10284.111 - 10334.523: 95.8145% ( 30) 00:08:24.182 10334.523 - 10384.935: 95.9212% ( 17) 00:08:24.182 10384.935 - 10435.348: 96.0153% ( 15) 00:08:24.182 10435.348 - 10485.760: 96.1032% ( 14) 00:08:24.182 10485.760 - 10536.172: 96.1847% ( 13) 00:08:24.182 10536.172 - 10586.585: 96.2663% ( 13) 00:08:24.182 10586.585 - 10636.997: 96.3542% ( 14) 00:08:24.182 10636.997 - 10687.409: 96.4232% ( 11) 00:08:24.182 10687.409 - 10737.822: 96.4734% ( 8) 00:08:24.182 10737.822 - 10788.234: 96.5236% ( 8) 00:08:24.182 10788.234 - 10838.646: 96.5801% ( 9) 00:08:24.182 10838.646 - 10889.058: 96.6365% ( 9) 00:08:24.182 10889.058 - 10939.471: 96.6805% ( 7) 00:08:24.182 10939.471 - 10989.883: 96.7056% ( 4) 00:08:24.182 10989.883 - 11040.295: 96.7369% ( 5) 00:08:24.182 11040.295 - 11090.708: 96.7683% ( 5) 00:08:24.182 11090.708 - 11141.120: 96.7809% ( 2) 00:08:24.182 11141.120 - 11191.532: 96.8060% ( 4) 00:08:24.182 11191.532 - 11241.945: 96.8248% ( 3) 00:08:24.182 11241.945 - 11292.357: 96.8499% ( 4) 00:08:24.182 11292.357 - 11342.769: 96.8750% ( 4) 00:08:24.182 11342.769 - 11393.182: 96.8938% ( 3) 00:08:24.182 11393.182 - 11443.594: 96.9189% ( 4) 00:08:24.182 11443.594 - 11494.006: 96.9503% ( 5) 00:08:24.182 11494.006 - 11544.418: 96.9691% ( 3) 00:08:24.182 11544.418 - 11594.831: 96.9880% ( 3) 00:08:24.182 11594.831 - 11645.243: 97.0068% ( 3) 00:08:24.182 11645.243 - 11695.655: 97.0382% ( 5) 00:08:24.182 11695.655 - 11746.068: 97.0695% ( 5) 00:08:24.182 11746.068 - 11796.480: 97.1197% ( 8) 00:08:24.182 11796.480 - 11846.892: 97.1699% ( 8) 00:08:24.182 11846.892 - 11897.305: 97.2201% ( 8) 00:08:24.182 11897.305 - 11947.717: 97.2578% ( 6) 00:08:24.182 11947.717 - 11998.129: 97.3017% ( 7) 00:08:24.182 11998.129 - 12048.542: 97.3394% ( 6) 00:08:24.182 12048.542 - 12098.954: 97.3707% ( 5) 00:08:24.182 12098.954 - 12149.366: 97.4084% ( 6) 00:08:24.182 12149.366 - 12199.778: 97.4523% ( 7) 00:08:24.182 12199.778 - 12250.191: 97.5025% ( 8) 00:08:24.182 12250.191 - 12300.603: 97.5276% ( 4) 00:08:24.182 12300.603 - 12351.015: 97.5590% ( 5) 00:08:24.182 12351.015 - 12401.428: 97.5778% ( 3) 00:08:24.182 12401.428 - 12451.840: 97.6155% ( 6) 00:08:24.182 12451.840 - 12502.252: 97.6845% ( 11) 00:08:24.182 12502.252 - 12552.665: 97.7472% ( 10) 00:08:24.182 12552.665 - 12603.077: 97.8288% ( 13) 00:08:24.182 12603.077 - 12653.489: 97.9167% ( 14) 00:08:24.182 12653.489 - 12703.902: 97.9731% ( 9) 00:08:24.182 12703.902 - 12754.314: 98.0233% ( 8) 00:08:24.182 12754.314 - 12804.726: 98.1112% ( 14) 00:08:24.182 12804.726 - 12855.138: 98.1739% ( 10) 00:08:24.182 12855.138 - 12905.551: 98.2367% ( 10) 00:08:24.182 12905.551 - 13006.375: 98.4249% ( 30) 00:08:24.182 13006.375 - 13107.200: 98.5065% ( 13) 00:08:24.182 13107.200 - 13208.025: 98.5693% ( 10) 00:08:24.182 13208.025 - 13308.849: 98.6320% ( 10) 00:08:24.182 13308.849 - 13409.674: 98.7073% ( 12) 00:08:24.182 13409.674 - 13510.498: 98.7450% ( 6) 00:08:24.182 13510.498 - 13611.323: 98.7826% ( 6) 00:08:24.182 13611.323 - 
00:08:24.183
00:08:24.183 22:04:30 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:24.183
00:08:24.183 real 0m2.447s
00:08:24.183 user 0m2.184s
00:08:24.183 sys 0m0.167s
00:08:24.183 22:04:30 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.183 22:04:30 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:24.183 ************************************
00:08:24.183 END TEST nvme_perf
00:08:24.183 ************************************
00:08:24.183 22:04:30 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:24.183 22:04:30 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:24.183 22:04:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.183 22:04:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.183 ************************************
00:08:24.183 START TEST nvme_hello_world
00:08:24.183 ************************************
00:08:24.183 22:04:30 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:24.183 Initializing NVMe Controllers
00:08:24.183 Attached to 0000:00:10.0
00:08:24.183 Namespace ID: 1 size: 6GB
00:08:24.183 Attached to 0000:00:11.0
00:08:24.183 Namespace ID: 1 size: 5GB
00:08:24.183 Attached to 0000:00:13.0
00:08:24.183 Namespace ID: 1 size: 1GB
00:08:24.183 Attached to 0000:00:12.0
00:08:24.183 Namespace ID: 1 size: 4GB
00:08:24.183 Namespace ID: 2 size: 4GB
00:08:24.183 Namespace ID: 3 size: 4GB
00:08:24.183 Initialization complete.
00:08:24.183 INFO: using host memory buffer for IO
00:08:24.183 Hello world!
00:08:24.183 INFO: using host memory buffer for IO
00:08:24.183 Hello world!
00:08:24.183 INFO: using host memory buffer for IO
00:08:24.183 Hello world!
00:08:24.183 INFO: using host memory buffer for IO
00:08:24.183 Hello world!
00:08:24.183 INFO: using host memory buffer for IO
00:08:24.183 Hello world!
00:08:24.183 INFO: using host memory buffer for IO
00:08:24.183 Hello world!
00:08:24.183
00:08:24.183 real 0m0.198s
00:08:24.183 user 0m0.077s
00:08:24.183 sys 0m0.082s
00:08:24.183 22:04:30 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.183 22:04:30 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:24.183 ************************************
00:08:24.183 END TEST nvme_hello_world
00:08:24.183 ************************************
00:08:24.444 22:04:30 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:24.444 22:04:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.444 22:04:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.444 22:04:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.444 ************************************
00:08:24.444 START TEST nvme_sgl
00:08:24.444 ************************************
00:08:24.444 22:04:30 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:24.444 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:24.444 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:24.444 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:24.444 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:24.444 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:24.444 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:24.444 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:24.444 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:24.444 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:24.444 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:24.444 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:24.444 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:24.444 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:24.444 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:24.444 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:24.444 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:24.445 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:24.445 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:24.706 NVMe Readv/Writev Request test
00:08:24.706 Attached to 0000:00:10.0
00:08:24.706 Attached to 0000:00:11.0
00:08:24.706 Attached to 0000:00:13.0
00:08:24.706 Attached to 0000:00:12.0
00:08:24.706 0000:00:10.0: build_io_request_2 test passed
00:08:24.706 0000:00:10.0: build_io_request_4 test passed
00:08:24.706 0000:00:10.0: build_io_request_5 test passed
00:08:24.706 0000:00:10.0: build_io_request_6 test passed
00:08:24.706 0000:00:10.0: build_io_request_7 test passed
00:08:24.706 0000:00:10.0: build_io_request_10 test passed
00:08:24.706 0000:00:11.0: build_io_request_2 test passed
00:08:24.706 0000:00:11.0: build_io_request_4 test passed
00:08:24.706 0000:00:11.0: build_io_request_5 test passed
00:08:24.706 0000:00:11.0: build_io_request_6 test passed
00:08:24.706 0000:00:11.0: build_io_request_7 test passed
00:08:24.706 0000:00:11.0: build_io_request_10 test passed
00:08:24.706 Cleaning up...
00:08:24.706
00:08:24.706 real 0m0.253s
00:08:24.706 user 0m0.125s
00:08:24.706 sys 0m0.085s
00:08:24.706 22:04:30 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.706 22:04:30 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:24.706 ************************************
00:08:24.706 END TEST nvme_sgl
00:08:24.706 ************************************
00:08:24.706 22:04:30 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:24.706 22:04:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.706 22:04:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.706 22:04:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.706 ************************************
00:08:24.706 START TEST nvme_e2edp
00:08:24.706 ************************************
00:08:24.706 22:04:30 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:24.706 NVMe Write/Read with End-to-End data protection test
00:08:24.706 Attached to 0000:00:10.0
00:08:24.706 Attached to 0000:00:11.0
00:08:24.706 Attached to 0000:00:13.0
00:08:24.706 Attached to 0000:00:12.0
00:08:24.706 Cleaning up...
00:08:24.706
00:08:24.706 real 0m0.200s
00:08:24.706 user 0m0.058s
00:08:24.706 sys 0m0.094s
00:08:24.706 22:04:31 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.706 22:04:31 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:24.706 ************************************
00:08:24.706 END TEST nvme_e2edp
00:08:24.706 ************************************
00:08:24.967 22:04:31 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:24.967 22:04:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.967 22:04:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.967 22:04:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.967 ************************************
00:08:24.967 START TEST nvme_reserve
00:08:24.967 ************************************
00:08:24.967 22:04:31 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:24.967 =====================================================
00:08:24.967 NVMe Controller at PCI bus 0, device 16, function 0
00:08:24.967 =====================================================
00:08:24.967 Reservations: Not Supported
00:08:24.967 =====================================================
00:08:24.967 NVMe Controller at PCI bus 0, device 17, function 0
00:08:24.967 =====================================================
00:08:24.967 Reservations: Not Supported
00:08:24.967 =====================================================
00:08:24.967 NVMe Controller at PCI bus 0, device 19, function 0
00:08:24.967 =====================================================
00:08:24.967 Reservations: Not Supported
00:08:24.967 =====================================================
00:08:24.967 NVMe Controller at PCI bus 0, device 18, function 0
00:08:24.967 =====================================================
00:08:24.967 Reservations: Not Supported
00:08:24.967 Reservation test passed
00:08:24.967
00:08:24.967 real 0m0.204s
00:08:24.967 user 0m0.072s
00:08:24.967 sys 0m0.084s
00:08:24.967 22:04:31 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.967 22:04:31 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:24.968 ************************************
00:08:24.968 END TEST nvme_reserve
00:08:24.968 ************************************
00:08:24.968 22:04:31 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:24.968 22:04:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.968 22:04:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.968 22:04:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:25.228 ************************************
00:08:25.228 START TEST nvme_err_injection
00:08:25.228 ************************************
00:08:25.228 22:04:31 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:25.228 NVMe Error Injection test
00:08:25.228 Attached to 0000:00:10.0
00:08:25.228 Attached to 0000:00:11.0
00:08:25.228 Attached to 0000:00:13.0
00:08:25.228 Attached to 0000:00:12.0
00:08:25.228 0000:00:10.0: get features failed as expected
00:08:25.228 0000:00:11.0: get features failed as expected
00:08:25.229 0000:00:13.0: get features failed as expected
00:08:25.229 0000:00:12.0: get features failed as expected
00:08:25.229 0000:00:10.0: get features successfully as expected
00:08:25.229 0000:00:11.0: get features successfully as expected
00:08:25.229 0000:00:13.0: get features successfully as expected
00:08:25.229 0000:00:12.0: get features successfully as expected
00:08:25.229 0000:00:10.0: read failed as expected
00:08:25.229 0000:00:11.0: read failed as expected
00:08:25.229 0000:00:13.0: read failed as expected
00:08:25.229 0000:00:12.0: read failed as expected
00:08:25.229 0000:00:10.0: read successfully as expected
00:08:25.229 0000:00:11.0: read successfully as expected
00:08:25.229 0000:00:13.0: read successfully as expected
00:08:25.229 0000:00:12.0: read successfully as expected
00:08:25.229 Cleaning up...
00:08:25.229
00:08:25.229 real 0m0.210s
00:08:25.229 user 0m0.080s
00:08:25.229 sys 0m0.081s
00:08:25.229 22:04:31 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.229 22:04:31 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:25.229 ************************************
00:08:25.229 END TEST nvme_err_injection
00:08:25.229 ************************************
00:08:25.229 22:04:31 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:25.229 22:04:31 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:08:25.229 22:04:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:25.229 22:04:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:25.229 ************************************
00:08:25.229 START TEST nvme_overhead
00:08:25.229 ************************************
00:08:25.229 22:04:31 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:26.613 Initializing NVMe Controllers
00:08:26.613 Attached to 0000:00:10.0
00:08:26.613 Attached to 0000:00:11.0
00:08:26.613 Attached to 0000:00:13.0
00:08:26.613 Attached to 0000:00:12.0
00:08:26.613 Initialization complete. Launching workers.
00:08:26.613 submit (in ns) avg, min, max = 12004.8, 10330.8, 1202366.9
00:08:26.613 complete (in ns) avg, min, max = 8303.1, 7222.3, 300814.6
00:08:26.613
00:08:26.613 Submit histogram
00:08:26.613 ================
00:08:26.613 Range in us Cumulative Count
[submit latency buckets from 10.289 us to 1203.594 us omitted; cumulative count reaches 100.0000% ( 1) at 1197.292 - 1203.594 us]
00:08:26.614
00:08:26.614 Complete histogram
00:08:26.614 ==================
00:08:26.614 Range in us Cumulative Count
00:08:26.614 7.188 - 7.237: 0.0186% ( 2)
00:08:26.614 7.237 - 7.286: 1.2933% ( 112)
00:08:26.614 7.335
- 7.385: 3.8147% ( 271) 00:08:26.614 7.385 - 7.434: 7.3409% ( 379) 00:08:26.614 7.434 - 7.483: 11.2114% ( 416) 00:08:26.614 7.483 - 7.532: 14.5050% ( 354) 00:08:26.614 7.532 - 7.582: 16.4775% ( 212) 00:08:26.614 7.582 - 7.631: 17.5661% ( 117) 00:08:26.614 7.631 - 7.680: 18.3569% ( 85) 00:08:26.614 7.680 - 7.729: 18.8128% ( 49) 00:08:26.614 7.729 - 7.778: 19.0361% ( 24) 00:08:26.614 7.778 - 7.828: 19.2315% ( 21) 00:08:26.614 7.828 - 7.877: 19.7618% ( 57) 00:08:26.614 7.877 - 7.926: 22.8787% ( 335) 00:08:26.614 7.926 - 7.975: 31.7361% ( 952) 00:08:26.614 7.975 - 8.025: 43.3662% ( 1250) 00:08:26.614 8.025 - 8.074: 52.6517% ( 998) 00:08:26.614 8.074 - 8.123: 60.7369% ( 869) 00:08:26.614 8.123 - 8.172: 69.2687% ( 917) 00:08:26.614 8.172 - 8.222: 77.0190% ( 833) 00:08:26.614 8.222 - 8.271: 82.7503% ( 616) 00:08:26.614 8.271 - 8.320: 86.6394% ( 418) 00:08:26.614 8.320 - 8.369: 89.6353% ( 322) 00:08:26.614 8.369 - 8.418: 91.8310% ( 236) 00:08:26.614 8.418 - 8.468: 93.3662% ( 165) 00:08:26.614 8.468 - 8.517: 94.1198% ( 81) 00:08:26.614 8.517 - 8.566: 94.5757% ( 49) 00:08:26.615 8.566 - 8.615: 94.9014% ( 35) 00:08:26.615 8.615 - 8.665: 95.0689% ( 18) 00:08:26.615 8.665 - 8.714: 95.2642% ( 21) 00:08:26.615 8.714 - 8.763: 95.4596% ( 21) 00:08:26.615 8.763 - 8.812: 95.5992% ( 15) 00:08:26.615 8.812 - 8.862: 95.7760% ( 19) 00:08:26.615 8.862 - 8.911: 95.9713% ( 21) 00:08:26.615 8.911 - 8.960: 96.1574% ( 20) 00:08:26.615 8.960 - 9.009: 96.2505% ( 10) 00:08:26.615 9.009 - 9.058: 96.3714% ( 13) 00:08:26.615 9.058 - 9.108: 96.4738% ( 11) 00:08:26.615 9.108 - 9.157: 96.5203% ( 5) 00:08:26.615 9.157 - 9.206: 96.5575% ( 4) 00:08:26.615 9.206 - 9.255: 96.6040% ( 5) 00:08:26.615 9.255 - 9.305: 96.6505% ( 5) 00:08:26.615 9.305 - 9.354: 96.6598% ( 1) 00:08:26.615 9.354 - 9.403: 96.6691% ( 1) 00:08:26.615 9.403 - 9.452: 96.6971% ( 3) 00:08:26.615 9.452 - 9.502: 96.7250% ( 3) 00:08:26.615 9.502 - 9.551: 96.7436% ( 2) 00:08:26.615 9.551 - 9.600: 96.7529% ( 1) 00:08:26.615 9.649 - 9.698: 96.7808% ( 3) 00:08:26.615 9.698 - 9.748: 96.7901% ( 1) 00:08:26.615 9.748 - 9.797: 96.7994% ( 1) 00:08:26.615 9.797 - 9.846: 96.8180% ( 2) 00:08:26.615 9.846 - 9.895: 96.8273% ( 1) 00:08:26.615 9.895 - 9.945: 96.8552% ( 3) 00:08:26.615 9.945 - 9.994: 96.8645% ( 1) 00:08:26.615 10.092 - 10.142: 96.8738% ( 1) 00:08:26.615 10.240 - 10.289: 96.9017% ( 3) 00:08:26.615 10.289 - 10.338: 96.9111% ( 1) 00:08:26.615 10.338 - 10.388: 96.9390% ( 3) 00:08:26.615 10.437 - 10.486: 96.9576% ( 2) 00:08:26.615 10.535 - 10.585: 96.9855% ( 3) 00:08:26.615 10.634 - 10.683: 96.9948% ( 1) 00:08:26.615 10.732 - 10.782: 97.0041% ( 1) 00:08:26.615 10.782 - 10.831: 97.0227% ( 2) 00:08:26.615 10.880 - 10.929: 97.0506% ( 3) 00:08:26.615 10.929 - 10.978: 97.0599% ( 1) 00:08:26.615 10.978 - 11.028: 97.0692% ( 1) 00:08:26.615 11.175 - 11.225: 97.1064% ( 4) 00:08:26.615 11.225 - 11.274: 97.1157% ( 1) 00:08:26.615 11.274 - 11.323: 97.1250% ( 1) 00:08:26.615 11.323 - 11.372: 97.1344% ( 1) 00:08:26.615 11.372 - 11.422: 97.1809% ( 5) 00:08:26.615 11.422 - 11.471: 97.2088% ( 3) 00:08:26.615 11.471 - 11.520: 97.2181% ( 1) 00:08:26.615 11.520 - 11.569: 97.2367% ( 2) 00:08:26.615 11.569 - 11.618: 97.2553% ( 2) 00:08:26.615 11.618 - 11.668: 97.3018% ( 5) 00:08:26.615 11.668 - 11.717: 97.3390% ( 4) 00:08:26.615 11.717 - 11.766: 97.3763% ( 4) 00:08:26.615 11.766 - 11.815: 97.3856% ( 1) 00:08:26.615 11.815 - 11.865: 97.4228% ( 4) 00:08:26.615 11.865 - 11.914: 97.4321% ( 1) 00:08:26.615 11.914 - 11.963: 97.4507% ( 2) 00:08:26.615 11.963 - 12.012: 97.5065% ( 6) 00:08:26.615 
12.012 - 12.062: 97.5344% ( 3) 00:08:26.615 12.111 - 12.160: 97.5902% ( 6) 00:08:26.615 12.160 - 12.209: 97.6461% ( 6) 00:08:26.615 12.209 - 12.258: 97.6926% ( 5) 00:08:26.615 12.258 - 12.308: 97.7019% ( 1) 00:08:26.615 12.308 - 12.357: 97.7298% ( 3) 00:08:26.615 12.357 - 12.406: 97.7391% ( 1) 00:08:26.615 12.455 - 12.505: 97.7484% ( 1) 00:08:26.615 12.554 - 12.603: 97.7577% ( 1) 00:08:26.615 12.702 - 12.800: 97.7763% ( 2) 00:08:26.615 12.898 - 12.997: 97.8042% ( 3) 00:08:26.615 12.997 - 13.095: 97.8229% ( 2) 00:08:26.615 13.095 - 13.194: 97.8415% ( 2) 00:08:26.615 13.194 - 13.292: 97.8508% ( 1) 00:08:26.615 13.292 - 13.391: 97.8694% ( 2) 00:08:26.615 13.391 - 13.489: 97.9252% ( 6) 00:08:26.615 13.489 - 13.588: 97.9717% ( 5) 00:08:26.615 13.588 - 13.686: 98.0368% ( 7) 00:08:26.615 13.686 - 13.785: 98.1020% ( 7) 00:08:26.615 13.785 - 13.883: 98.1578% ( 6) 00:08:26.615 13.883 - 13.982: 98.2322% ( 8) 00:08:26.615 13.982 - 14.080: 98.2787% ( 5) 00:08:26.615 14.080 - 14.178: 98.3253% ( 5) 00:08:26.615 14.178 - 14.277: 98.4462% ( 13) 00:08:26.615 14.277 - 14.375: 98.5579% ( 12) 00:08:26.615 14.375 - 14.474: 98.6137% ( 6) 00:08:26.615 14.474 - 14.572: 98.6788% ( 7) 00:08:26.615 14.572 - 14.671: 98.7160% ( 4) 00:08:26.615 14.671 - 14.769: 98.7533% ( 4) 00:08:26.615 14.769 - 14.868: 98.8091% ( 6) 00:08:26.615 14.868 - 14.966: 98.8742% ( 7) 00:08:26.615 14.966 - 15.065: 98.9393% ( 7) 00:08:26.615 15.065 - 15.163: 98.9766% ( 4) 00:08:26.615 15.163 - 15.262: 99.0045% ( 3) 00:08:26.615 15.262 - 15.360: 99.0138% ( 1) 00:08:26.615 15.360 - 15.458: 99.0231% ( 1) 00:08:26.615 15.458 - 15.557: 99.0324% ( 1) 00:08:26.615 15.557 - 15.655: 99.0603% ( 3) 00:08:26.615 15.754 - 15.852: 99.0789% ( 2) 00:08:26.615 16.345 - 16.443: 99.0882% ( 1) 00:08:26.615 16.542 - 16.640: 99.0975% ( 1) 00:08:26.615 16.935 - 17.034: 99.1068% ( 1) 00:08:26.615 17.132 - 17.231: 99.1161% ( 1) 00:08:26.615 17.428 - 17.526: 99.1254% ( 1) 00:08:26.615 17.526 - 17.625: 99.1347% ( 1) 00:08:26.615 17.920 - 18.018: 99.1440% ( 1) 00:08:26.615 18.708 - 18.806: 99.1533% ( 1) 00:08:26.615 22.351 - 22.449: 99.1626% ( 1) 00:08:26.615 22.449 - 22.548: 99.2371% ( 8) 00:08:26.615 22.548 - 22.646: 99.3673% ( 14) 00:08:26.615 22.646 - 22.745: 99.5348% ( 18) 00:08:26.615 22.745 - 22.843: 99.6185% ( 9) 00:08:26.615 22.843 - 22.942: 99.6651% ( 5) 00:08:26.615 22.942 - 23.040: 99.7116% ( 5) 00:08:26.615 23.040 - 23.138: 99.7395% ( 3) 00:08:26.615 23.138 - 23.237: 99.7581% ( 2) 00:08:26.615 23.237 - 23.335: 99.7860% ( 3) 00:08:26.615 23.335 - 23.434: 99.8232% ( 4) 00:08:26.615 23.532 - 23.631: 99.8325% ( 1) 00:08:26.615 23.729 - 23.828: 99.8418% ( 1) 00:08:26.615 24.123 - 24.222: 99.8511% ( 1) 00:08:26.615 24.222 - 24.320: 99.8604% ( 1) 00:08:26.615 25.994 - 26.191: 99.8697% ( 1) 00:08:26.615 26.388 - 26.585: 99.8790% ( 1) 00:08:26.615 27.766 - 27.963: 99.8884% ( 1) 00:08:26.615 29.735 - 29.932: 99.8977% ( 1) 00:08:26.615 35.840 - 36.037: 99.9163% ( 2) 00:08:26.615 36.825 - 37.022: 99.9256% ( 1) 00:08:26.615 39.385 - 39.582: 99.9349% ( 1) 00:08:26.615 42.142 - 42.338: 99.9442% ( 1) 00:08:26.615 45.095 - 45.292: 99.9535% ( 1) 00:08:26.615 45.292 - 45.489: 99.9628% ( 1) 00:08:26.615 48.640 - 48.837: 99.9721% ( 1) 00:08:26.615 54.351 - 54.745: 99.9814% ( 1) 00:08:26.615 55.138 - 55.532: 99.9907% ( 1) 00:08:26.615 299.323 - 300.898: 100.0000% ( 1) 00:08:26.615 00:08:26.615 00:08:26.615 real 0m1.207s 00:08:26.615 user 0m1.067s 00:08:26.615 sys 0m0.087s 00:08:26.615 22:04:32 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.615 
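The submit/complete histograms above are printed by test/nvme/overhead, which drives 4 KiB reads (-o 4096) for one second (-t 1) and buckets per-IO latency in microseconds. As a rough illustration only — not the tool itself — here is a minimal sketch of the same measurement against SPDK's public C API; the io_ctx struct and the coarse log2 bucketing are stand-ins of my own, not the tool's actual histogram layout, and error handling is omitted:

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Per-IO timing context. The real overhead tool keeps separate
 * submit-path and complete-path histograms; this sketch only times
 * submit-to-completion as a single number. */
struct io_ctx {
	uint64_t	submit_tsc;
	uint64_t	buckets[64];	/* crude log2 latency histogram */
	bool		done;
};

static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;
	uint64_t us = (spdk_get_ticks() - ctx->submit_tsc) * 1000000 /
		      spdk_get_ticks_hz();
	unsigned b = 0;

	(void)cpl;
	/* log2 bucket index, standing in for the finer ranges above */
	while (us > 1 && b < 63) {
		us >>= 1;
		b++;
	}
	ctx->buckets[b]++;
	ctx->done = true;
}

/* One timed 4096-byte read at LBA 0 (assumes a 4 KiB-sector namespace;
 * buf must come from spdk_dma_zmalloc()). */
static void
timed_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	   void *buf, struct io_ctx *ctx)
{
	ctx->done = false;
	ctx->submit_tsc = spdk_get_ticks();
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, read_done, ctx, 0);
	while (!ctx->done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
}
```

The tool's split into separate submit and complete histograms comes from timing the submission call and the completion callback independently; the sketch folds both into one end-to-end figure.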
22:04:32 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:26.615 ************************************ 00:08:26.615 END TEST nvme_overhead 00:08:26.615 ************************************ 00:08:26.615 22:04:32 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:26.615 22:04:32 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:26.615 22:04:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.615 22:04:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.615 ************************************ 00:08:26.615 START TEST nvme_arbitration 00:08:26.615 ************************************ 00:08:26.615 22:04:32 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:29.917 Initializing NVMe Controllers 00:08:29.917 Attached to 0000:00:10.0 00:08:29.917 Attached to 0000:00:11.0 00:08:29.917 Attached to 0000:00:13.0 00:08:29.917 Attached to 0000:00:12.0 00:08:29.917 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:29.917 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:29.917 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:29.917 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:29.917 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:29.917 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:29.917 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:29.917 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:29.917 Initialization complete. Launching workers. 00:08:29.917 Starting thread on core 1 with urgent priority queue 00:08:29.917 Starting thread on core 2 with urgent priority queue 00:08:29.917 Starting thread on core 3 with urgent priority queue 00:08:29.917 Starting thread on core 0 with urgent priority queue 00:08:29.917 QEMU NVMe Ctrl (12340 ) core 0: 6954.67 IO/s 14.38 secs/100000 ios 00:08:29.917 QEMU NVMe Ctrl (12342 ) core 0: 6954.67 IO/s 14.38 secs/100000 ios 00:08:29.917 QEMU NVMe Ctrl (12341 ) core 1: 6954.67 IO/s 14.38 secs/100000 ios 00:08:29.917 QEMU NVMe Ctrl (12342 ) core 1: 6954.67 IO/s 14.38 secs/100000 ios 00:08:29.917 QEMU NVMe Ctrl (12343 ) core 2: 6378.67 IO/s 15.68 secs/100000 ios 00:08:29.917 QEMU NVMe Ctrl (12342 ) core 3: 6528.00 IO/s 15.32 secs/100000 ios 00:08:29.917 ======================================================== 00:08:29.917 00:08:29.917 00:08:29.917 real 0m3.221s 00:08:29.917 user 0m9.023s 00:08:29.917 sys 0m0.106s 00:08:29.917 22:04:36 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.917 22:04:36 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:29.917 ************************************ 00:08:29.917 END TEST nvme_arbitration 00:08:29.917 ************************************ 00:08:29.917 22:04:36 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:29.917 22:04:36 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:29.917 22:04:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.917 22:04:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.917 ************************************ 00:08:29.917 START TEST nvme_single_aen 00:08:29.917 ************************************ 00:08:29.917 22:04:36 nvme.nvme_single_aen -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:30.177 Asynchronous Event Request test 00:08:30.177 Attached to 0000:00:10.0 00:08:30.177 Attached to 0000:00:11.0 00:08:30.178 Attached to 0000:00:13.0 00:08:30.178 Attached to 0000:00:12.0 00:08:30.178 Reset controller to setup AER completions for this process 00:08:30.178 Registering asynchronous event callbacks... 00:08:30.178 Getting orig temperature thresholds of all controllers 00:08:30.178 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.178 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.178 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.178 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.178 Setting all controllers temperature threshold low to trigger AER 00:08:30.178 Waiting for all controllers temperature threshold to be set lower 00:08:30.178 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.178 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:30.178 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.178 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:30.178 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.178 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:30.178 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.178 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:30.178 Waiting for all controllers to trigger AER and reset threshold 00:08:30.178 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.178 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.178 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.178 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.178 Cleaning up... 
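The transcript above is the whole AER handshake: register a callback, read back each controller's original temperature threshold (343 Kelvin), drop the threshold below the current temperature (323 Kelvin) so the drive posts a temperature Asynchronous Event, then restore the threshold from the callback. A hedged single-controller sketch of that loop with SPDK's public API — the entry points are real, but the busy-poll loop and no-op completion callback are simplifications of my own:

```c
#include <stdbool.h>
#include "spdk/nvme.h"

static bool g_aer_done;

static void
noop_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg; (void)cpl;
}

/* Fires when the controller posts an Asynchronous Event -- here the
 * temperature event provoked by the threshold change below. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvme_ctrlr *ctrlr = arg;

	(void)cpl;
	/* Restore the original threshold (343 K in the log) so the
	 * event does not immediately re-fire. */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					343, 0, NULL, 0, noop_cb, NULL);
	g_aer_done = true;
}

static void
trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, ctrlr);

	/* Lower the composite temperature threshold (carried in cdw11)
	 * to 0 K; the current temperature (323 K above) now exceeds it,
	 * so the controller reports a temperature AER. */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					0, 0, NULL, 0, noop_cb, NULL);

	while (!g_aer_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```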
00:08:30.178 ************************************ 00:08:30.178 END TEST nvme_single_aen 00:08:30.178 ************************************ 00:08:30.178 00:08:30.178 real 0m0.228s 00:08:30.178 user 0m0.075s 00:08:30.178 sys 0m0.098s 00:08:30.178 22:04:36 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.178 22:04:36 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:30.178 22:04:36 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:30.178 22:04:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.178 22:04:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.178 22:04:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.178 ************************************ 00:08:30.178 START TEST nvme_doorbell_aers 00:08:30.178 ************************************ 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.178 22:04:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:30.438 [2024-12-16 22:04:36.604217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:08:40.415 Executing: test_write_invalid_db 00:08:40.415 Waiting for AER completion... 00:08:40.415 Failure: test_write_invalid_db 00:08:40.415 00:08:40.415 Executing: test_invalid_db_write_overflow_sq 00:08:40.415 Waiting for AER completion... 00:08:40.415 Failure: test_invalid_db_write_overflow_sq 00:08:40.415 00:08:40.415 Executing: test_invalid_db_write_overflow_cq 00:08:40.415 Waiting for AER completion... 
00:08:40.415 Failure: test_invalid_db_write_overflow_cq 00:08:40.415 00:08:40.415 22:04:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:40.415 22:04:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:40.415 [2024-12-16 22:04:46.630375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:08:50.387 Executing: test_write_invalid_db 00:08:50.387 Waiting for AER completion... 00:08:50.387 Failure: test_write_invalid_db 00:08:50.387 00:08:50.387 Executing: test_invalid_db_write_overflow_sq 00:08:50.387 Waiting for AER completion... 00:08:50.387 Failure: test_invalid_db_write_overflow_sq 00:08:50.387 00:08:50.387 Executing: test_invalid_db_write_overflow_cq 00:08:50.387 Waiting for AER completion... 00:08:50.387 Failure: test_invalid_db_write_overflow_cq 00:08:50.387 00:08:50.387 22:04:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:50.387 22:04:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:50.387 [2024-12-16 22:04:56.670390] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:00.355 Executing: test_write_invalid_db 00:09:00.355 Waiting for AER completion... 00:09:00.355 Failure: test_write_invalid_db 00:09:00.355 00:09:00.355 Executing: test_invalid_db_write_overflow_sq 00:09:00.355 Waiting for AER completion... 00:09:00.355 Failure: test_invalid_db_write_overflow_sq 00:09:00.355 00:09:00.355 Executing: test_invalid_db_write_overflow_cq 00:09:00.355 Waiting for AER completion... 00:09:00.355 Failure: test_invalid_db_write_overflow_cq 00:09:00.355 00:09:00.355 22:05:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:00.355 22:05:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.355 [2024-12-16 22:05:06.679226] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.352 Executing: test_write_invalid_db 00:09:10.352 Waiting for AER completion... 00:09:10.352 Failure: test_write_invalid_db 00:09:10.352 00:09:10.352 Executing: test_invalid_db_write_overflow_sq 00:09:10.352 Waiting for AER completion... 00:09:10.352 Failure: test_invalid_db_write_overflow_sq 00:09:10.352 00:09:10.352 Executing: test_invalid_db_write_overflow_cq 00:09:10.352 Waiting for AER completion... 
00:09:10.352 Failure: test_invalid_db_write_overflow_cq 00:09:10.352 00:09:10.352 00:09:10.352 real 0m40.174s 00:09:10.352 user 0m34.210s 00:09:10.352 sys 0m5.609s 00:09:10.352 22:05:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.352 ************************************ 00:09:10.352 END TEST nvme_doorbell_aers 00:09:10.352 ************************************ 00:09:10.352 22:05:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:10.352 22:05:16 nvme -- nvme/nvme.sh@97 -- # uname 00:09:10.352 22:05:16 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:10.352 22:05:16 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:10.352 22:05:16 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:10.352 22:05:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.352 22:05:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.352 ************************************ 00:09:10.352 START TEST nvme_multi_aen 00:09:10.352 ************************************ 00:09:10.352 22:05:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:10.610 [2024-12-16 22:05:16.726175] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.610 [2024-12-16 22:05:16.726230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.610 [2024-12-16 22:05:16.726241] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.610 [2024-12-16 22:05:16.727291] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.610 [2024-12-16 22:05:16.727314] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.611 [2024-12-16 22:05:16.727321] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.611 [2024-12-16 22:05:16.728177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.611 [2024-12-16 22:05:16.728197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.611 [2024-12-16 22:05:16.728203] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.611 [2024-12-16 22:05:16.729130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.611 [2024-12-16 22:05:16.729181] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 00:09:10.611 [2024-12-16 22:05:16.729206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76783) is not found. Dropping the request. 
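The repeated "The owning process (pid 76783) is not found. Dropping the request." messages are expected noise here: these tests run in SPDK multi-process mode against a shared stub (-i 0 selects the shared-memory id), and admin requests still pending for a process that has since exited are dropped rather than completed. A hedged sketch of how a secondary process joins that shared group — the probe/attach callbacks are minimal placeholders of my own:

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)opts;
	return true;	/* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)ctrlr; (void)opts;
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "aer_child";
	/* Same shared-memory id as the primary (-i 0 in the tests), so
	 * this process attaches to controllers the primary already owns. */
	opts.shm_id = 0;
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```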
00:09:10.611 Child process pid: 77303 00:09:10.611 [Child] Asynchronous Event Request test 00:09:10.611 [Child] Attached to 0000:00:10.0 00:09:10.611 [Child] Attached to 0000:00:11.0 00:09:10.611 [Child] Attached to 0000:00:13.0 00:09:10.611 [Child] Attached to 0000:00:12.0 00:09:10.611 [Child] Registering asynchronous event callbacks... 00:09:10.611 [Child] Getting orig temperature thresholds of all controllers 00:09:10.611 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:10.611 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 [Child] Cleaning up... 00:09:10.611 Asynchronous Event Request test 00:09:10.611 Attached to 0000:00:10.0 00:09:10.611 Attached to 0000:00:11.0 00:09:10.611 Attached to 0000:00:13.0 00:09:10.611 Attached to 0000:00:12.0 00:09:10.611 Reset controller to setup AER completions for this process 00:09:10.611 Registering asynchronous event callbacks... 
00:09:10.611 Getting orig temperature thresholds of all controllers 00:09:10.611 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.611 Setting all controllers temperature threshold low to trigger AER 00:09:10.611 Waiting for all controllers temperature threshold to be set lower 00:09:10.611 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:10.611 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:10.611 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:10.611 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.611 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:10.611 Waiting for all controllers to trigger AER and reset threshold 00:09:10.611 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.611 Cleaning up... 00:09:10.611 ************************************ 00:09:10.611 END TEST nvme_multi_aen 00:09:10.611 ************************************ 00:09:10.611 00:09:10.611 real 0m0.380s 00:09:10.611 user 0m0.127s 00:09:10.611 sys 0m0.152s 00:09:10.611 22:05:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.611 22:05:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:10.869 22:05:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:10.869 22:05:16 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:10.869 22:05:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.869 22:05:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.869 ************************************ 00:09:10.869 START TEST nvme_startup 00:09:10.869 ************************************ 00:09:10.870 22:05:16 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:10.870 Initializing NVMe Controllers 00:09:10.870 Attached to 0000:00:10.0 00:09:10.870 Attached to 0000:00:11.0 00:09:10.870 Attached to 0000:00:13.0 00:09:10.870 Attached to 0000:00:12.0 00:09:10.870 Initialization complete. 00:09:10.870 Time used:124416.016 (us). 
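nvme_startup only measures how long attach takes — about 124 ms here for the four emulated controllers. Bracketing the probe with the TSC is essentially all the tool does; a compact sketch under that assumption (the callbacks are trivial placeholders, and the print format merely mirrors the "Time used:" line above):

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)trid; (void)opts;
	return true;
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)trid; (void)ctrlr; (void)opts;
}

int
main(void)
{
	struct spdk_env_opts opts;
	uint64_t start, us;

	spdk_env_opts_init(&opts);
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	start = spdk_get_ticks();
	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
	us = (spdk_get_ticks() - start) * 1000000 / spdk_get_ticks_hz();

	printf("Time used:%" PRIu64 " (us).\n", us);
	return 0;
}
```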
00:09:10.870 00:09:10.870 real 0m0.175s 00:09:10.870 user 0m0.066s 00:09:10.870 sys 0m0.062s 00:09:10.870 22:05:17 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.870 ************************************ 00:09:10.870 END TEST nvme_startup 00:09:10.870 ************************************ 00:09:10.870 22:05:17 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:10.870 22:05:17 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:10.870 22:05:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.870 22:05:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.870 22:05:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.870 ************************************ 00:09:10.870 START TEST nvme_multi_secondary 00:09:10.870 ************************************ 00:09:10.870 22:05:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:10.870 22:05:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77359 00:09:10.870 22:05:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:10.870 22:05:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77360 00:09:10.870 22:05:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:10.870 22:05:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:14.184 Initializing NVMe Controllers 00:09:14.184 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.184 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.184 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.184 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.184 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:14.184 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:14.184 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:14.184 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:14.184 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:14.184 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:14.184 Initialization complete. Launching workers. 
00:09:14.184 ======================================================== 00:09:14.184 Latency(us) 00:09:14.184 Device Information : IOPS MiB/s Average min max 00:09:14.184 PCIE (0000:00:10.0) NSID 1 from core 1: 7817.41 30.54 2045.35 717.68 7438.77 00:09:14.184 PCIE (0000:00:11.0) NSID 1 from core 1: 7817.41 30.54 2046.55 734.85 7165.94 00:09:14.184 PCIE (0000:00:13.0) NSID 1 from core 1: 7817.41 30.54 2046.68 727.44 7050.31 00:09:14.184 PCIE (0000:00:12.0) NSID 1 from core 1: 7817.41 30.54 2046.87 737.41 7171.19 00:09:14.184 PCIE (0000:00:12.0) NSID 2 from core 1: 7817.41 30.54 2046.88 728.88 7221.64 00:09:14.184 PCIE (0000:00:12.0) NSID 3 from core 1: 7817.41 30.54 2046.96 740.04 7384.80 00:09:14.184 ======================================================== 00:09:14.184 Total : 46904.44 183.22 2046.55 717.68 7438.77 00:09:14.184 00:09:14.442 Initializing NVMe Controllers 00:09:14.442 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.442 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.442 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.442 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.442 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:14.442 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:14.442 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:14.442 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:14.442 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:14.442 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:14.442 Initialization complete. Launching workers. 00:09:14.442 ======================================================== 00:09:14.442 Latency(us) 00:09:14.442 Device Information : IOPS MiB/s Average min max 00:09:14.442 PCIE (0000:00:10.0) NSID 1 from core 2: 3257.69 12.73 4909.29 1006.38 12301.08 00:09:14.442 PCIE (0000:00:11.0) NSID 1 from core 2: 3257.69 12.73 4911.05 1013.27 12388.87 00:09:14.442 PCIE (0000:00:13.0) NSID 1 from core 2: 3257.69 12.73 4917.57 1103.07 12580.33 00:09:14.442 PCIE (0000:00:12.0) NSID 1 from core 2: 3257.69 12.73 4917.50 1171.41 12337.31 00:09:14.442 PCIE (0000:00:12.0) NSID 2 from core 2: 3257.69 12.73 4917.43 1161.70 12406.64 00:09:14.442 PCIE (0000:00:12.0) NSID 3 from core 2: 3257.69 12.73 4916.94 1044.32 12356.05 00:09:14.442 ======================================================== 00:09:14.442 Total : 19546.15 76.35 4914.96 1006.38 12580.33 00:09:14.442 00:09:14.442 22:05:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77359 00:09:16.341 Initializing NVMe Controllers 00:09:16.341 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:16.341 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:16.341 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:16.341 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:16.341 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:16.341 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:16.341 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:16.341 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:16.341 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:16.341 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:16.341 Initialization complete. Launching workers. 
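As a sanity check, the two tables above agree with Little's law at the configured queue depth (-q 16, apparently applied per namespace): IOPS is approximately queue depth divided by mean latency.

$$\text{IOPS} \approx \frac{QD}{\bar{t}}:\qquad \frac{16}{2046.55\,\mu\text{s}} \approx 7818\ \text{IO/s}\ (\text{reported } 7817.41),\qquad \frac{16}{4914.96\,\mu\text{s}} \approx 3255\ \text{IO/s}\ (\text{reported } 3257.69)$$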
00:09:16.341 ======================================================== 00:09:16.341 Latency(us) 00:09:16.341 Device Information : IOPS MiB/s Average min max 00:09:16.341 PCIE (0000:00:10.0) NSID 1 from core 0: 10762.93 42.04 1485.33 692.36 5660.70 00:09:16.341 PCIE (0000:00:11.0) NSID 1 from core 0: 10762.93 42.04 1486.20 711.25 5385.74 00:09:16.341 PCIE (0000:00:13.0) NSID 1 from core 0: 10762.93 42.04 1486.18 629.19 6057.11 00:09:16.341 PCIE (0000:00:12.0) NSID 1 from core 0: 10762.93 42.04 1486.16 544.50 5621.66 00:09:16.341 PCIE (0000:00:12.0) NSID 2 from core 0: 10762.93 42.04 1486.14 467.28 5199.61 00:09:16.341 PCIE (0000:00:12.0) NSID 3 from core 0: 10762.93 42.04 1486.12 391.19 5404.50 00:09:16.341 ======================================================== 00:09:16.341 Total : 64577.56 252.26 1486.02 391.19 6057.11 00:09:16.341 00:09:16.341 22:05:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77360 00:09:16.341 22:05:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77429 00:09:16.341 22:05:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:16.341 22:05:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77430 00:09:16.341 22:05:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:16.341 22:05:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:19.625 Initializing NVMe Controllers 00:09:19.625 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.625 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.625 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.625 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.625 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:19.625 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:19.625 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:19.625 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:19.625 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:19.625 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:19.625 Initialization complete. Launching workers. 
00:09:19.625 ======================================================== 00:09:19.625 Latency(us) 00:09:19.625 Device Information : IOPS MiB/s Average min max 00:09:19.625 PCIE (0000:00:10.0) NSID 1 from core 0: 7549.22 29.49 2117.97 725.18 6154.58 00:09:19.626 PCIE (0000:00:11.0) NSID 1 from core 0: 7549.22 29.49 2119.07 747.94 6062.20 00:09:19.626 PCIE (0000:00:13.0) NSID 1 from core 0: 7549.22 29.49 2119.06 747.80 6529.85 00:09:19.626 PCIE (0000:00:12.0) NSID 1 from core 0: 7549.22 29.49 2119.05 747.76 6462.55 00:09:19.626 PCIE (0000:00:12.0) NSID 2 from core 0: 7549.22 29.49 2119.04 736.44 6272.27 00:09:19.626 PCIE (0000:00:12.0) NSID 3 from core 0: 7549.22 29.49 2119.01 746.95 6448.33 00:09:19.626 ======================================================== 00:09:19.626 Total : 45295.32 176.93 2118.87 725.18 6529.85 00:09:19.626 00:09:19.626 Initializing NVMe Controllers 00:09:19.626 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.626 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.626 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:19.626 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.626 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:19.626 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:19.626 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:19.626 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:19.626 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:19.626 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:19.626 Initialization complete. Launching workers. 00:09:19.626 ======================================================== 00:09:19.626 Latency(us) 00:09:19.626 Device Information : IOPS MiB/s Average min max 00:09:19.626 PCIE (0000:00:10.0) NSID 1 from core 1: 7593.61 29.66 2105.58 741.47 7331.07 00:09:19.626 PCIE (0000:00:11.0) NSID 1 from core 1: 7593.61 29.66 2106.53 674.60 7225.45 00:09:19.626 PCIE (0000:00:13.0) NSID 1 from core 1: 7593.61 29.66 2106.45 602.09 7220.28 00:09:19.626 PCIE (0000:00:12.0) NSID 1 from core 1: 7593.61 29.66 2106.37 539.78 7383.13 00:09:19.626 PCIE (0000:00:12.0) NSID 2 from core 1: 7593.61 29.66 2106.29 451.30 7149.13 00:09:19.626 PCIE (0000:00:12.0) NSID 3 from core 1: 7593.61 29.66 2106.22 401.46 7172.98 00:09:19.626 ======================================================== 00:09:19.626 Total : 45561.64 177.98 2106.24 401.46 7383.13 00:09:19.626 00:09:21.529 Initializing NVMe Controllers 00:09:21.529 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.529 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.529 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.529 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.529 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:21.529 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:21.529 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:21.529 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:21.529 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:21.529 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:21.529 Initialization complete. Launching workers. 
00:09:21.529 ======================================================== 00:09:21.529 Latency(us) 00:09:21.529 Device Information : IOPS MiB/s Average min max 00:09:21.529 PCIE (0000:00:10.0) NSID 1 from core 2: 4600.11 17.97 3476.50 724.82 14967.39 00:09:21.529 PCIE (0000:00:11.0) NSID 1 from core 2: 4600.11 17.97 3477.70 705.54 15127.56 00:09:21.529 PCIE (0000:00:13.0) NSID 1 from core 2: 4600.11 17.97 3478.22 750.92 14750.53 00:09:21.529 PCIE (0000:00:12.0) NSID 1 from core 2: 4600.11 17.97 3480.06 752.41 16500.53 00:09:21.529 PCIE (0000:00:12.0) NSID 2 from core 2: 4600.11 17.97 3479.69 750.50 12591.47 00:09:21.529 PCIE (0000:00:12.0) NSID 3 from core 2: 4600.11 17.97 3479.79 755.12 12330.80 00:09:21.529 ======================================================== 00:09:21.529 Total : 27600.67 107.82 3478.66 705.54 16500.53 00:09:21.529 00:09:21.529 ************************************ 00:09:21.529 END TEST nvme_multi_secondary 00:09:21.529 ************************************ 00:09:21.529 22:05:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77429 00:09:21.529 22:05:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77430 00:09:21.529 00:09:21.529 real 0m10.516s 00:09:21.529 user 0m18.319s 00:09:21.529 sys 0m0.570s 00:09:21.529 22:05:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.529 22:05:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:21.529 22:05:27 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:21.529 22:05:27 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76379 ]] 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@1094 -- # kill 76379 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@1095 -- # wait 76379 00:09:21.529 [2024-12-16 22:05:27.747008] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.747104] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.747130] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.747152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.747810] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.747882] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.747901] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.747919] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.748585] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 
00:09:21.529 [2024-12-16 22:05:27.748648] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.748667] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.748716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.749534] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.749591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.749609] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 [2024-12-16 22:05:27.749628] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77302) is not found. Dropping the request. 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:21.529 22:05:27 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.529 22:05:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.529 ************************************ 00:09:21.529 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:21.529 ************************************ 00:09:21.529 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:21.791 * Looking for test storage... 
00:09:21.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.791 --rc genhtml_branch_coverage=1 00:09:21.791 --rc genhtml_function_coverage=1 00:09:21.791 --rc genhtml_legend=1 00:09:21.791 --rc geninfo_all_blocks=1 00:09:21.791 --rc geninfo_unexecuted_blocks=1 00:09:21.791 00:09:21.791 ' 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.791 --rc genhtml_branch_coverage=1 00:09:21.791 --rc genhtml_function_coverage=1 00:09:21.791 --rc genhtml_legend=1 00:09:21.791 --rc geninfo_all_blocks=1 00:09:21.791 --rc geninfo_unexecuted_blocks=1 00:09:21.791 00:09:21.791 ' 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.791 --rc genhtml_branch_coverage=1 00:09:21.791 --rc genhtml_function_coverage=1 00:09:21.791 --rc genhtml_legend=1 00:09:21.791 --rc geninfo_all_blocks=1 00:09:21.791 --rc geninfo_unexecuted_blocks=1 00:09:21.791 00:09:21.791 ' 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.791 --rc genhtml_branch_coverage=1 00:09:21.791 --rc genhtml_function_coverage=1 00:09:21.791 --rc genhtml_legend=1 00:09:21.791 --rc geninfo_all_blocks=1 00:09:21.791 --rc geninfo_unexecuted_blocks=1 00:09:21.791 00:09:21.791 ' 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:21.791 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:21.791 
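The stuck-admin-command test that follows arms an admin-queue error injection over RPC (bdev_nvme_add_error_injection with --opc 10, i.e. Get Features, a 15 s timeout, and --do_not_submit), issues a Get Features (Number of Queues) that then hangs, and verifies that bdev_nvme_reset_controller unsticks it so the command completes with the injected status (sct 0, sc 1: Invalid Opcode). The driver exposes an analogous in-process fault-injection helper; a hedged sketch using it — this is a different entry point than the bdev-layer RPC the test itself drives, shown here only to illustrate the mechanism, and it assumes an already-attached ctrlr as in the earlier sketches:

```c
#include "spdk/nvme.h"

/* Arm a one-shot fault on the admin queue: the next GET FEATURES
 * (opcode 0x0a) is held for up to 15 s and completed with
 * sct=0/sc=1 (Generic / Invalid Opcode) instead of being submitted,
 * matching the RPC parameters in the log. */
static int
arm_stuck_get_features(struct spdk_nvme_ctrlr *ctrlr)
{
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr,
			NULL,			/* NULL selects the admin qpair */
			SPDK_NVME_OPC_GET_FEATURES,
			true,			/* do_not_submit: request stays pending */
			15000000,		/* timeout_in_us, as in the test */
			1,			/* err_count */
			SPDK_NVME_SCT_GENERIC,
			SPDK_NVME_SC_INVALID_OPCODE);
}

/* A controller reset is what finally forces the held request to
 * complete with the injected status -- the behavior the test checks. */
static int
unstick(struct spdk_nvme_ctrlr *ctrlr)
{
	return spdk_nvme_ctrlr_reset(ctrlr);
}
```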
22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.792 22:05:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:21.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77591 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77591 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77591 ']' 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.792 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:21.792 [2024-12-16 22:05:28.121156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:21.792 [2024-12-16 22:05:28.121529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77591 ] 00:09:22.053 [2024-12-16 22:05:28.295894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.053 [2024-12-16 22:05:28.328772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.053 [2024-12-16 22:05:28.329010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.053 [2024-12-16 22:05:28.329161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.053 [2024-12-16 22:05:28.329213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.622 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.622 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:22.622 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:22.622 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.622 22:05:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.882 nvme0n1 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_rnXY7.txt 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.882 true 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734386729 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77614 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:22.882 22:05:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:24.795 [2024-12-16 22:05:31.047831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:24.795 [2024-12-16 22:05:31.048076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:24.795 [2024-12-16 22:05:31.048097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:24.795 [2024-12-16 22:05:31.048113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:24.795 [2024-12-16 22:05:31.049736] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:24.795 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77614 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77614 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77614 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_rnXY7.txt 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:24.795 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_rnXY7.txt 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77591 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77591 ']' 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77591 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:24.796 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.057 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77591 00:09:25.057 killing process with pid 77591 00:09:25.057 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.057 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.057 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77591' 00:09:25.057 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77591 00:09:25.057 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77591 00:09:25.318 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:25.318 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:25.318 ************************************ 00:09:25.318 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:25.318 ************************************ 00:09:25.318 00:09:25.318 real 0m3.599s 
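The xtrace above is the payoff of the stuck-admin-command test: bdev_nvme_send_cmd wrote the admin completion to /tmp/err_inj_rnXY7.txt, jq pulled out the base64-encoded 16-byte .cpl blob, and base64_decode_bits extracted (status >> shift) & mask from it, yielding SC=0x1 and SCT=0x0, which matches the injected error (--sct 0 --sc 1). A minimal standalone sketch of that extraction, assuming the status halfword sits in the last two bytes of the completion (CDW3 bits 16-31, phase bit at bit 0 of the halfword); this is not the verbatim nvme_reset_stuck_adm_cmd.sh helper:

    # Sketch: pull SC/SCT out of a base64-encoded 16-byte NVMe completion
    # such as the AAAAAAAAAAAAAAAAAAACAA== seen above.
    decode_status_bits() {
        local b64=$1 shift_by=$2 mask=$3
        local -a bytes
        bytes=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
        # Status halfword is little-endian in bytes 14..15:
        # bit 0 = phase, bits 1-8 = SC, bits 9-11 = SCT.
        local status=$((bytes[15] << 8 | bytes[14]))
        printf '0x%x\n' $(((status >> shift_by) & mask))
    }
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0

For this payload the decoded bytes are fourteen zeros followed by 0x02 0x00, so the halfword is 0x0002: shifting off the phase bit gives SC=1 (Invalid Opcode, as printed by spdk_nvme_print_completion earlier), and bits 9 and up are zero, giving SCT=0 (generic command status).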
00:09:25.318 user 0m12.660s 00:09:25.318 sys 0m0.536s 00:09:25.318 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.318 22:05:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.318 22:05:31 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:25.318 22:05:31 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:25.318 22:05:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.318 22:05:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.318 22:05:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.318 ************************************ 00:09:25.318 START TEST nvme_fio 00:09:25.318 ************************************ 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:25.318 22:05:31 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:25.318 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:25.579 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:25.579 22:05:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:25.841 22:05:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:25.841 22:05:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:25.841 22:05:32 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:25.841 22:05:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:26.102 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.102 fio-3.35 00:09:26.102 Starting 1 thread 00:09:31.391 00:09:31.391 test: (groupid=0, jobs=1): err= 0: pid=77737: Mon Dec 16 22:05:36 2024 00:09:31.391 read: IOPS=19.7k, BW=76.8MiB/s (80.5MB/s)(154MiB/2001msec) 00:09:31.391 slat (usec): min=3, max=118, avg= 5.45, stdev= 2.67 00:09:31.391 clat (usec): min=693, max=8984, avg=3232.15, stdev=1109.91 00:09:31.391 lat (usec): min=699, max=8989, avg=3237.60, stdev=1111.09 00:09:31.391 clat percentiles (usec): 00:09:31.391 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:31.391 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:31.391 | 70.00th=[ 3163], 80.00th=[ 3720], 90.00th=[ 5080], 95.00th=[ 5866], 00:09:31.391 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 7963], 99.95th=[ 8225], 00:09:31.391 | 99.99th=[ 8455] 00:09:31.391 bw ( KiB/s): min=69488, max=82688, per=98.14%, avg=77192.00, stdev=6871.42, samples=3 00:09:31.391 iops : min=17372, max=20672, avg=19298.00, stdev=1717.86, samples=3 00:09:31.391 write: IOPS=19.6k, BW=76.7MiB/s (80.4MB/s)(153MiB/2001msec); 0 zone resets 00:09:31.391 slat (nsec): min=3855, max=90800, avg=5628.99, stdev=2669.01 00:09:31.391 clat (usec): min=704, max=8570, avg=3259.43, stdev=1115.57 00:09:31.391 lat (usec): min=710, max=8576, avg=3265.06, stdev=1116.74 00:09:31.391 clat percentiles (usec): 00:09:31.391 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:31.391 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2999], 00:09:31.391 | 70.00th=[ 3195], 80.00th=[ 3785], 90.00th=[ 5080], 95.00th=[ 5866], 00:09:31.391 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7898], 99.95th=[ 8160], 00:09:31.391 | 99.99th=[ 8455] 00:09:31.391 bw ( KiB/s): min=69680, max=82944, per=98.42%, avg=77306.67, stdev=6852.12, samples=3 00:09:31.392 iops : min=17420, max=20736, avg=19326.67, stdev=1713.03, samples=3 00:09:31.392 lat (usec) : 750=0.01%, 1000=0.02% 00:09:31.392 lat (msec) : 2=1.21%, 4=80.87%, 10=17.90% 00:09:31.392 cpu : usr=99.00%, sys=0.10%, ctx=2, majf=0, minf=626 00:09:31.392 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:31.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.392 issued rwts: total=39349,39294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.392 00:09:31.392 Run status group 0 (all jobs): 00:09:31.392 READ: bw=76.8MiB/s (80.5MB/s), 76.8MiB/s-76.8MiB/s (80.5MB/s-80.5MB/s), io=154MiB (161MB), run=2001-2001msec 00:09:31.392 WRITE: bw=76.7MiB/s (80.4MB/s), 76.7MiB/s-76.7MiB/s (80.4MB/s-80.4MB/s), io=153MiB (161MB), run=2001-2001msec 00:09:31.392 ----------------------------------------------------- 00:09:31.392 Suppressions used: 00:09:31.392 count bytes template 00:09:31.392 1 32 /usr/src/fio/parse.c 00:09:31.392 1 8 libtcmalloc_minimal.so 00:09:31.392 ----------------------------------------------------- 00:09:31.392 00:09:31.392 22:05:36 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:31.392 22:05:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:31.392 22:05:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:31.392 22:05:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:31.392 22:05:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:31.392 22:05:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:31.392 22:05:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:31.392 22:05:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:31.392 22:05:37 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:31.392 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:31.392 fio-3.35 00:09:31.392 Starting 1 thread 00:09:35.599 00:09:35.599 test: (groupid=0, jobs=1): err= 0: pid=77793: Mon Dec 16 22:05:41 2024 00:09:35.599 read: IOPS=15.2k, BW=59.2MiB/s (62.1MB/s)(118MiB/2001msec) 00:09:35.599 slat (nsec): min=4847, max=84823, avg=6885.77, stdev=4012.23 00:09:35.599 clat (usec): min=679, max=12510, avg=4192.98, stdev=1472.98 00:09:35.599 lat (usec): min=684, max=12523, avg=4199.86, stdev=1474.38 00:09:35.599 clat percentiles (usec): 00:09:35.599 | 1.00th=[ 2212], 5.00th=[ 2638], 10.00th=[ 2802], 20.00th=[ 3032], 00:09:35.599 | 30.00th=[ 3195], 40.00th=[ 3392], 50.00th=[ 3654], 60.00th=[ 4080], 00:09:35.599 | 70.00th=[ 4752], 80.00th=[ 5407], 90.00th=[ 6325], 95.00th=[ 7111], 00:09:35.599 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[10028], 99.95th=[10945], 00:09:35.599 | 99.99th=[11863] 00:09:35.599 bw ( KiB/s): min=57800, max=64136, per=99.47%, avg=60306.67, stdev=3368.73, samples=3 00:09:35.599 iops : min=14450, max=16034, avg=15076.67, stdev=842.18, samples=3 00:09:35.599 write: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(119MiB/2001msec); 0 zone resets 00:09:35.599 slat (nsec): min=4969, max=84210, avg=7055.87, stdev=4017.26 00:09:35.599 clat (usec): min=670, max=12379, avg=4218.57, stdev=1468.28 00:09:35.599 lat (usec): min=675, max=12385, avg=4225.62, stdev=1469.67 00:09:35.599 clat percentiles (usec): 00:09:35.599 | 1.00th=[ 2180], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 3032], 00:09:35.599 | 30.00th=[ 3228], 40.00th=[ 3425], 50.00th=[ 3687], 60.00th=[ 4146], 00:09:35.599 | 70.00th=[ 4817], 80.00th=[ 5407], 90.00th=[ 6325], 95.00th=[ 7111], 00:09:35.599 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[10290], 99.95th=[10814], 00:09:35.599 | 99.99th=[11731] 00:09:35.599 bw ( KiB/s): min=56880, max=63656, per=98.73%, avg=59973.33, stdev=3426.23, samples=3 00:09:35.599 iops : min=14220, max=15914, avg=14993.33, stdev=856.56, samples=3 00:09:35.599 lat (usec) : 750=0.01%, 1000=0.02% 00:09:35.599 lat (msec) : 2=0.63%, 4=57.43%, 10=41.78%, 20=0.12% 00:09:35.599 cpu : usr=98.45%, sys=0.20%, ctx=4, majf=0, minf=627 00:09:35.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:35.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.599 issued rwts: total=30330,30386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.599 00:09:35.599 Run status group 0 (all jobs): 00:09:35.599 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=118MiB (124MB), run=2001-2001msec 00:09:35.599 WRITE: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=119MiB (124MB), run=2001-2001msec 00:09:35.599 ----------------------------------------------------- 00:09:35.599 Suppressions used: 00:09:35.599 count bytes template 00:09:35.599 1 32 /usr/src/fio/parse.c 00:09:35.599 1 8 libtcmalloc_minimal.so 00:09:35.599 ----------------------------------------------------- 00:09:35.599 00:09:35.599 22:05:41 nvme.nvme_fio -- nvme/nvme.sh@44 
-- # ran_fio=true 00:09:35.599 22:05:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:35.599 22:05:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:35.599 22:05:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:35.599 22:05:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:35.599 22:05:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:35.860 22:05:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:35.860 22:05:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:35.860 22:05:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:36.121 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:36.121 fio-3.35 00:09:36.121 Starting 1 thread 00:09:41.456 00:09:41.457 test: (groupid=0, jobs=1): err= 0: pid=77853: Mon Dec 16 22:05:47 2024 00:09:41.457 read: IOPS=14.5k, BW=56.6MiB/s (59.3MB/s)(113MiB/2001msec) 00:09:41.457 slat (nsec): min=4866, max=77593, avg=7096.15, stdev=4458.28 00:09:41.457 clat (usec): min=1156, max=13176, avg=4402.41, stdev=1551.04 00:09:41.457 lat (usec): min=1161, max=13221, avg=4409.51, stdev=1552.73 00:09:41.457 clat percentiles (usec): 00:09:41.457 | 1.00th=[ 2376], 5.00th=[ 2737], 10.00th=[ 2900], 20.00th=[ 3097], 00:09:41.457 | 30.00th=[ 3294], 40.00th=[ 3490], 50.00th=[ 3851], 
60.00th=[ 4490], 00:09:41.457 | 70.00th=[ 5080], 80.00th=[ 5735], 90.00th=[ 6587], 95.00th=[ 7308], 00:09:41.457 | 99.00th=[ 8979], 99.50th=[ 9765], 99.90th=[11076], 99.95th=[11600], 00:09:41.457 | 99.99th=[13173] 00:09:41.457 bw ( KiB/s): min=54856, max=57032, per=96.87%, avg=56109.33, stdev=1125.06, samples=3 00:09:41.457 iops : min=13714, max=14258, avg=14027.33, stdev=281.26, samples=3 00:09:41.457 write: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec); 0 zone resets 00:09:41.457 slat (usec): min=4, max=100, avg= 7.23, stdev= 4.30 00:09:41.457 clat (usec): min=1176, max=13103, avg=4405.36, stdev=1535.11 00:09:41.457 lat (usec): min=1182, max=13119, avg=4412.59, stdev=1536.69 00:09:41.457 clat percentiles (usec): 00:09:41.457 | 1.00th=[ 2376], 5.00th=[ 2769], 10.00th=[ 2933], 20.00th=[ 3130], 00:09:41.457 | 30.00th=[ 3294], 40.00th=[ 3523], 50.00th=[ 3851], 60.00th=[ 4490], 00:09:41.457 | 70.00th=[ 5080], 80.00th=[ 5735], 90.00th=[ 6587], 95.00th=[ 7308], 00:09:41.457 | 99.00th=[ 8979], 99.50th=[ 9765], 99.90th=[11076], 99.95th=[12125], 00:09:41.457 | 99.99th=[13042] 00:09:41.457 bw ( KiB/s): min=55232, max=56960, per=96.73%, avg=56112.00, stdev=864.44, samples=3 00:09:41.457 iops : min=13808, max=14240, avg=14028.00, stdev=216.11, samples=3 00:09:41.457 lat (msec) : 2=0.46%, 4=52.48%, 10=46.67%, 20=0.39% 00:09:41.457 cpu : usr=98.55%, sys=0.00%, ctx=5, majf=0, minf=627 00:09:41.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:41.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:41.457 issued rwts: total=28976,29020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:41.457 00:09:41.457 Run status group 0 (all jobs): 00:09:41.457 READ: bw=56.6MiB/s (59.3MB/s), 56.6MiB/s-56.6MiB/s (59.3MB/s-59.3MB/s), io=113MiB (119MB), run=2001-2001msec 00:09:41.457 WRITE: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:09:41.457 ----------------------------------------------------- 00:09:41.457 Suppressions used: 00:09:41.457 count bytes template 00:09:41.457 1 32 /usr/src/fio/parse.c 00:09:41.457 1 8 libtcmalloc_minimal.so 00:09:41.457 ----------------------------------------------------- 00:09:41.457 00:09:41.457 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:41.457 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:41.457 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:41.457 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:41.457 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:41.457 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:41.718 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:41.718 22:05:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 
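This is the fourth and final pass of the same per-controller pattern: nvme.sh walks the bdfs gathered earlier from gen_nvme.sh | jq, probes each controller with spdk_nvme_identify, then hands stock fio the SPDK plugin via LD_PRELOAD (with libasan preloaded first, since this is an ASAN build and the sanitizer must load before the plugin). Note that --filename writes the traddr with dots (0000.00.13.0) instead of colons, because fio splits filename lists on ':'. A condensed sketch of the loop, with one labeled assumption: the 4160 block size for extended-LBA formats (4096B data plus 64B inline metadata) is my guess, not read from nvme.sh; the runs above all grep for 'Extended Data LBA' and settle on bs=4096:

    # Condensed sketch of the loop driving the four fio runs; paths match the
    # log, the 4160 extended-LBA block size is an assumption.
    rootdir=/home/vagrant/spdk_repo/spdk
    plugin=$rootdir/build/fio/spdk_nvme
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        bs=4096
        if "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
                | grep -q 'Extended Data LBA'; then
            bs=4160    # assumed: 4096B data + 64B inline metadata per block
        fi
        asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
            "$rootdir/app/fio/nvme/example_config.fio" \
            "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
    done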
00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:41.718 22:05:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:41.718 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:41.718 fio-3.35 00:09:41.718 Starting 1 thread 00:09:47.011 00:09:47.011 test: (groupid=0, jobs=1): err= 0: pid=77920: Mon Dec 16 22:05:52 2024 00:09:47.011 read: IOPS=15.0k, BW=58.4MiB/s (61.3MB/s)(117MiB/2001msec) 00:09:47.011 slat (usec): min=4, max=102, avg= 6.76, stdev= 4.03 00:09:47.011 clat (usec): min=313, max=11985, avg=4253.03, stdev=1466.67 00:09:47.011 lat (usec): min=318, max=12029, avg=4259.80, stdev=1467.93 00:09:47.011 clat percentiles (usec): 00:09:47.011 | 1.00th=[ 2245], 5.00th=[ 2638], 10.00th=[ 2802], 20.00th=[ 2966], 00:09:47.011 | 30.00th=[ 3163], 40.00th=[ 3392], 50.00th=[ 3752], 60.00th=[ 4359], 00:09:47.011 | 70.00th=[ 5014], 80.00th=[ 5604], 90.00th=[ 6325], 95.00th=[ 6915], 00:09:47.011 | 99.00th=[ 8225], 99.50th=[ 8848], 99.90th=[11076], 99.95th=[11469], 00:09:47.011 | 99.99th=[11994] 00:09:47.011 bw ( KiB/s): min=54520, max=60456, per=96.68%, avg=57845.33, stdev=3031.85, samples=3 00:09:47.011 iops : min=13630, max=15114, avg=14461.33, stdev=757.96, samples=3 00:09:47.011 write: IOPS=15.0k, BW=58.4MiB/s (61.3MB/s)(117MiB/2001msec); 0 zone resets 00:09:47.011 slat (nsec): min=4935, max=87541, avg=6955.25, stdev=4108.15 00:09:47.011 clat (usec): min=322, max=11895, avg=4276.05, stdev=1480.39 00:09:47.011 lat (usec): min=327, max=11913, avg=4283.00, stdev=1481.71 00:09:47.011 clat percentiles (usec): 00:09:47.011 | 1.00th=[ 2245], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2999], 00:09:47.011 | 30.00th=[ 3163], 40.00th=[ 3425], 50.00th=[ 3785], 60.00th=[ 4424], 00:09:47.011 | 70.00th=[ 5014], 80.00th=[ 5604], 90.00th=[ 6325], 95.00th=[ 6980], 00:09:47.011 | 99.00th=[ 8455], 99.50th=[ 8979], 99.90th=[11207], 99.95th=[11338], 00:09:47.011 | 99.99th=[11731] 00:09:47.012 bw 
( KiB/s): min=54296, max=60016, per=96.42%, avg=57704.00, stdev=3013.39, samples=3 00:09:47.012 iops : min=13574, max=15004, avg=14426.00, stdev=753.35, samples=3 00:09:47.012 lat (usec) : 500=0.03%, 750=0.02%, 1000=0.02% 00:09:47.012 lat (msec) : 2=0.50%, 4=53.81%, 10=45.38%, 20=0.26% 00:09:47.012 cpu : usr=97.60%, sys=0.50%, ctx=14, majf=0, minf=625 00:09:47.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:47.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.012 issued rwts: total=29932,29937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.012 00:09:47.012 Run status group 0 (all jobs): 00:09:47.012 READ: bw=58.4MiB/s (61.3MB/s), 58.4MiB/s-58.4MiB/s (61.3MB/s-61.3MB/s), io=117MiB (123MB), run=2001-2001msec 00:09:47.012 WRITE: bw=58.4MiB/s (61.3MB/s), 58.4MiB/s-58.4MiB/s (61.3MB/s-61.3MB/s), io=117MiB (123MB), run=2001-2001msec 00:09:47.012 ----------------------------------------------------- 00:09:47.012 Suppressions used: 00:09:47.012 count bytes template 00:09:47.012 1 32 /usr/src/fio/parse.c 00:09:47.012 1 8 libtcmalloc_minimal.so 00:09:47.012 ----------------------------------------------------- 00:09:47.012 00:09:47.012 22:05:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:47.012 ************************************ 00:09:47.012 END TEST nvme_fio 00:09:47.012 ************************************ 00:09:47.012 22:05:52 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:47.012 00:09:47.012 real 0m21.134s 00:09:47.012 user 0m15.157s 00:09:47.012 sys 0m8.825s 00:09:47.012 22:05:52 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.012 22:05:52 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:47.012 00:09:47.012 real 1m29.616s 00:09:47.012 user 3m31.799s 00:09:47.012 sys 0m19.219s 00:09:47.012 ************************************ 00:09:47.012 END TEST nvme 00:09:47.012 ************************************ 00:09:47.012 22:05:52 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.012 22:05:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.012 22:05:52 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:47.012 22:05:52 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:47.012 22:05:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.012 22:05:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.012 22:05:52 -- common/autotest_common.sh@10 -- # set +x 00:09:47.012 ************************************ 00:09:47.012 START TEST nvme_scc 00:09:47.012 ************************************ 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:47.012 * Looking for test storage... 
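nvme_fio completes in about 21 s of wall time across the four controllers, and autotest moves on to nvme_scc through the same run_test wrapper that has framed every section so far: it validates its arguments, prints the START banner, times the test function, and prints the END banner plus real/user/sys on success. A minimal sketch of that framing, assuming the real autotest_common.sh helper also handles xtrace toggling and result bookkeeping not shown here:

    # Minimal sketch of the run_test banner/timing pattern; the real helper
    # does more (xtrace control, result recording).
    run_test() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' '************' "$name" '************'
        time "$@"
        printf '%s\nEND TEST %s\n%s\n' '************' "$name" '************'
    }
    run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh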
00:09:47.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.012 --rc genhtml_branch_coverage=1 00:09:47.012 --rc genhtml_function_coverage=1 00:09:47.012 --rc genhtml_legend=1 00:09:47.012 --rc geninfo_all_blocks=1 00:09:47.012 --rc geninfo_unexecuted_blocks=1 00:09:47.012 00:09:47.012 ' 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.012 --rc genhtml_branch_coverage=1 00:09:47.012 --rc genhtml_function_coverage=1 00:09:47.012 --rc genhtml_legend=1 00:09:47.012 --rc geninfo_all_blocks=1 00:09:47.012 --rc geninfo_unexecuted_blocks=1 00:09:47.012 00:09:47.012 ' 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:47.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.012 --rc genhtml_branch_coverage=1 00:09:47.012 --rc genhtml_function_coverage=1 00:09:47.012 --rc genhtml_legend=1 00:09:47.012 --rc geninfo_all_blocks=1 00:09:47.012 --rc geninfo_unexecuted_blocks=1 00:09:47.012 00:09:47.012 ' 00:09:47.012 22:05:52 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.012 --rc genhtml_branch_coverage=1 00:09:47.012 --rc genhtml_function_coverage=1 00:09:47.012 --rc genhtml_legend=1 00:09:47.012 --rc geninfo_all_blocks=1 00:09:47.012 --rc geninfo_unexecuted_blocks=1 00:09:47.012 00:09:47.012 ' 00:09:47.012 22:05:52 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.012 22:05:52 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.012 22:05:52 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.012 22:05:52 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.012 22:05:52 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.012 22:05:52 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:47.012 22:05:52 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:47.012 22:05:52 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:47.012 22:05:52 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.012 22:05:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:47.012 22:05:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:47.012 22:05:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:47.012 22:05:52 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.273 Waiting for block devices as requested 00:09:47.273 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.273 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.273 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.534 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.846 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:52.846 22:05:58 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:52.846 22:05:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:52.846 22:05:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:52.846 22:05:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.846 22:05:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
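Everything that follows for the next several screens is nvme/functions.sh building one associative array per controller: nvme_get runs nvme-cli's id-ctrl against /dev/nvme0 and turns every 'field : value' line into nvme0[field]=value via eval, one IFS=: read per register, which is why each field produces an identical three-line xtrace stanza. A compact sketch of the same scrape without the eval indirection; 'ctrl' is a hypothetical stand-in for the generated per-controller arrays (nvme0, nvme1, ...), and the trimming is simplified relative to the real helper:

    # Compact sketch of the id-ctrl scrape shown above.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}       # ltrim the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    printf '%s\n' "${ctrl[vid]}"    # 0x1b36 for this QEMU controller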
00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:52.846 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
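Most of these registers are plain counters, but a few are bit fields: the oacs=0x12a captured just above encodes which optional admin commands the controller accepts. Per my reading of the NVMe base specification (the bit names are my gloss, not anything functions.sh prints), the low bits map to Security Send/Receive, Format NVM, Firmware Download/Commit, Namespace Management, Device Self-test, Directives, NVMe-MI, Virtualization Management, and Doorbell Buffer Config, so 0x12a advertises Format NVM, Namespace Management, Directives, and Doorbell Buffer Config, consistent with QEMU's emulated controller. A quick decode:

    # Quick OACS bit decode; names are a gloss on the NVMe base spec.
    oacs=0x12a
    names=(security format firmware ns-mgmt self-test directives nvme-mi virt-mgmt dbbuf)
    for i in "${!names[@]}"; do
        ((oacs >> i & 1)) && echo "oacs bit $i: ${names[i]}"
    done
    # prints bits 1 (format), 3 (ns-mgmt), 5 (directives), 8 (dbbuf)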
00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:52.847 22:05:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.847 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:52.848 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:52.849 22:05:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:52.849 
22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.849 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
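The trace above shows the core loop of nvme/functions.sh@21-23: each line of `nvme id-ns` (or `nvme id-ctrl`) output is split on ':' with `IFS=: read -r reg val`, empty values are skipped, and each surviving pair is eval'd into a controller- or namespace-specific associative array (ng0n1, nvme0, ...). A minimal sketch of that parsing loop, assuming a fixed array name and a heredoc sample in place of a real /dev/ng0n1 — parse_id_output and the sample values are illustrative stand-ins, not the script's own helper:

#!/usr/bin/env bash
# Minimal sketch, assuming a fixed array name: parse "field : value"
# lines as emitted by nvme-cli into a bash associative array.
declare -A ns_info

parse_id_output() {
    local reg val
    # Split each line on the first ':' only; values that themselves
    # contain ':' (e.g. power-state strings like "mp:25.00W ...")
    # stay intact in $val.
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue          # skip headers/blank values
        reg=${reg//[[:space:]]/}                      # field names carry no spaces
        val="${val#"${val%%[![:space:]]*}"}"          # trim leading whitespace
        ns_info[$reg]=$val
    done
}

# Sample input standing in for `nvme id-ns /dev/ng0n1` output:
parse_id_output <<'EOF'
nsze  : 0x140000
ncap  : 0x140000
nuse  : 0x140000
nlbaf : 7
flbas : 0x4
EOF

printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"

The traced helper differs in one respect: it declares the target array with `local -gA` under a caller-chosen name and assigns through `eval 'ng0n1[nsze]=...'`, which is exactly the indirection visible in the log; the fixed ns_info array above sidesteps that for readability.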
00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:52.850 22:05:58 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.850 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:52.851 22:05:58 nvme_scc 
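Having finished ng0n1, the trace loops back through the glob at nvme/functions.sh@54 and picks up the block-device form nvme0n1 of the same namespace. That glob matches both spellings a controller exposes under its sysfs directory — the generic character device (ng0n1) and the block device (nvme0n1). A standalone sketch of the enumeration, reusing the extglob pattern visible in the trace; the list_namespaces wrapper is a hypothetical name added here for illustration:

#!/usr/bin/env bash
# Sketch of the namespace walk at nvme/functions.sh@54-56.
shopt -s extglob nullglob

list_namespaces() {
    local ctrl=$1 ns
    # "$ctrl" is a sysfs path like /sys/class/nvme/nvme0, so
    # ${ctrl##*nvme} is "0" and ${ctrl##*/} is "nvme0"; the pattern
    # therefore matches both ng0n1 and nvme0n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        printf '%s\n' "${ns##*/}"
    done
}

for ctrl in /sys/class/nvme/nvme+([0-9]); do
    echo "controller ${ctrl##*/}:"
    list_namespaces "$ctrl"
done

nullglob keeps both loops silent on hosts without NVMe devices, and extglob is required for the @(...) alternation and +([0-9]) patterns the sketch (and the traced script) rely on.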
-- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:52.851 22:05:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.851 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:52.852 22:05:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:52.852 22:05:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:52.852 22:05:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.852 22:05:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:52.852 22:05:58 
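With both namespaces of nvme0 parsed, functions.sh@60-63 records the controller in three maps (ctrls, nvmes, bdfs) plus an ordered_ctrls list indexed by controller number, then the outer loop moves on to /sys/class/nvme/nvme1 and repeats the pci_can_use gate. A rough reconstruction of that bookkeeping under stated assumptions — register_ctrl is a made-up helper, and reading the BDF from the sysfs "address" attribute stands in for the script's own earlier PCI lookup:

#!/usr/bin/env bash
# Sketch of the registration step at nvme/functions.sh@60-63.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

register_ctrl() {
    local ctrl_dev=$1 bdf
    bdf=$(cat "/sys/class/nvme/$ctrl_dev/address" 2>/dev/null) || bdf=unknown
    ctrls[$ctrl_dev]=$ctrl_dev                    # name of the controller array
    nvmes[$ctrl_dev]="${ctrl_dev}_ns"             # name of its namespace array
    bdfs[$ctrl_dev]=$bdf                          # PCI domain:bus:device.function
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # index by controller number
}

register_ctrl nvme0
register_ctrl nvme1

for dev in "${ordered_ctrls[@]}"; do
    printf '%s -> %s\n' "$dev" "${bdfs[$dev]}"
done

Indexing ordered_ctrls by ${ctrl_dev/nvme/} keeps controllers sorted by number regardless of the order sysfs happens to list them in, which is why the trace assigns ordered_ctrls[0]=nvme0 before nvme1 is even probed.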
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:52.852 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 
22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:52.853 
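Every IFS=: / read -r reg val / eval triple repeating through this dump is one iteration of the nvme_get helper: each "field : value" line emitted by nvme-cli is split at the first colon and stored into a globally scoped associative array named after the device, so nvme1[vid], nvme1[sn], and the rest become directly addressable later in the test. A condensed sketch of that loop, under the assumption that the helper receives the full nvme-cli command line (the real functions.sh signature may differ):

    # Condensed sketch of the nvme_get parsing loop seen in the xtrace above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # declare e.g. nvme1 globally
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # header/blank lines carry no value
            reg=${reg//[[:space:]]/}         # strip the padded field name
            # only the first ':' splits, so values such as
            # nqn.2019-08.org.qemu:12340 survive intact in $val
            eval "${ref}[$reg]=\"${val# }\""
        done < <("$@")
    }

    # usage mirroring the trace (nvme-cli path taken from the log):
    nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    echo "${nvme1[sn]}: ${nvme1[mn]}"        # 12340 : QEMU NVMe Ctrl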
22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:52.853 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
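Two of the fields captured just above are packed log2 pairs: sqes=0x66 and cqes=0x44 carry the required entry size in the low nibble and the maximum in the high nibble, so this QEMU controller supports exactly 64-byte submission and 16-byte completion queue entries. A small decoder (the helper name is illustrative, not part of nvme/functions.sh):

    # Decode an NVMe SQES/CQES byte: bits 3:0 = required entry size (2^n),
    # bits 7:4 = maximum entry size (2^n).
    decode_qes() {
        local v=$(( $1 ))
        echo "required=$((1 << (v & 0xf))) max=$((1 << (v >> 4)))"
    }
    decode_qes 0x66    # required=64 max=64  (submission queue entries)
    decode_qes 0x44    # required=16 max=16  (completion queue entries)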
00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:52.854 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.855 22:05:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:52.855 22:05:58 
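With the id-ctrl dump complete, the loop above switches to the controller's namespaces using the extglob pattern visible in the trace, which matches both the generic character node (ng1n1) and the block node (nvme1n1) under the controller's sysfs directory; each match then gets its own nvme_get ... id-ns pass. Roughly, and assuming the nvme_get sketch from earlier:

    # Sketch of the namespace sweep; needs extglob, as the traced pattern does.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    declare -A nvme1_ns
    declare -n _ctrl_ns=nvme1_ns                 # nameref, as in functions.sh@53

    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                         # ng1n1, then nvme1n1
        # nvme_get "$ns_dev" ... id-ns "/dev/$ns_dev"   # as traced above
        _ctrl_ns[${ns_dev##*n}]=$ns_dev          # both map to index 1; the
    done                                         # block node wins the slot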
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:52.855 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:52.856 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:52.857 22:05:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 
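The lbafN strings filling this stretch describe every LBA format the namespace supports: ms is the per-block metadata size in bytes, lbads the data size as a power of two, and rp a relative-performance hint. Decoding one of the captured descriptors (the helper name is illustrative):

    # lbads is log2 of the data block size, so lbads:12 means 4096-byte blocks.
    lbaf_block_size() {
        local desc=$1
        [[ $desc =~ lbads:([0-9]+) ]] || return 1
        echo $(( 1 << BASH_REMATCH[1] ))
    }
    lbaf_block_size 'ms:64 lbads:12 rp:0 (in use)'   # prints 4096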
22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:52.857 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:52.858 
22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.858 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:52.859 22:05:58 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:52.859 22:05:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:52.859 22:05:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:52.859 22:05:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.859 22:05:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:52.859 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
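What the trace above records is nvme/functions.sh's nvme_get helper walking the output of `nvme id-ctrl /dev/nvme2` one field per line: split on the first colon with IFS, skip blanks, and eval each pair into a global associative array named after the device. A minimal sketch reconstructed from the trace, not the verbatim SPDK source (`nvme` here stands in for the CI's /usr/local/src/nvme-cli/nvme build):

    nvme_get() {                           # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # as traced: local -gA 'nvme2=()'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # field names are padded for alignment
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\"\${val# }\""   # keep trailing spaces (sn, mn, fr carry them)
        done < <(nvme "$@")
    }

After the loop, nvme2[vid], nvme2[sn], nvme2[oacs], ... read back like any bash associative array, which is how the later field checks in this test consume them.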
00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:52.860 22:05:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
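Two of the fields just captured, wctemp=343 and cctemp=373, are the warning and critical composite-temperature thresholds, which identify-controller reports in kelvin. A quick conversion (k2c is a hypothetical helper, not part of functions.sh):

    k2c() { echo "$(($1 - 273))"; }   # spec values are whole kelvin, so integer math suffices
    k2c 343    # -> 70  degC (WCTEMP, warning threshold)
    k2c 373    # -> 100 degC (CCTEMP, critical threshold)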
00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.860 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:52.861 22:05:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:52.861 
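Among the values above, sqes=0x66 and cqes=0x44 pack two sizes into one byte: the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, both as log2 of the size in bytes. decode_es below is a hypothetical decoder, not something functions.sh defines:

    decode_es() {
        local v=$(($1))
        echo "min=$((1 << (v & 0xf))) max=$((1 << (v >> 4))) bytes"
    }
    decode_es 0x66    # min=64 max=64  -> the standard 64-byte submission-queue entry
    decode_es 0x44    # min=16 max=16  -> the standard 16-byte completion-queue entry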
22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:52.861 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.862 
22:05:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
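The namespace pass starting here comes from the extglob pattern traced at functions.sh@54: for nvme2 it matches both the character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...) under the controller's sysfs directory, and ${ns##*n} later strips everything through the last 'n' to recover the namespace index for _ctrl_ns. A standalone sketch of that expansion (paths illustrative):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    inst=${ctrl##*nvme}                                 # -> "2"
    for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
        echo "${ns##*/} -> namespace index ${ns##*n}"   # ng2n1 -> 1, ng2n2 -> 2, ...
    done

Note also ng2n1's flbas=0x4 captured above: it selects LBA format 4, and the lbaf4 entry recorded further down ('ms:0 lbads:12 rp:0 (in use)') confirms 2^12 = 4096-byte logical blocks with no metadata.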
00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:52.862 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:52.863 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:52.864 22:05:58 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:52.864 22:05:58 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 
22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:52.864 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.865 22:05:59 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.865 22:05:59 
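At functions.sh@54-58, visible at the seam between each namespace above, the script walks every namespace node under the controller with an extended glob that matches both the generic character devices (ng2nY) and the block devices (nvme2nY), then records each in _ctrl_ns keyed by namespace id. A sketch of that enumeration, assuming the controller path seen in this run:

    #!/usr/bin/env bash
    shopt -s extglob nullglob          # the @(...) alternation needs extglob

    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2         # controller seen in this run

    # "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* expands to ng2* | nvme2n*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}               # ng2n1, ng2n2, ..., nvme2n1, ...
        _ctrl_ns[${ns##*n}]=$ns_dev    # key is the namespace id: 1, 2, 3
    done

Because ng2nY sorts before nvme2nY in glob expansion, each namespace id is assigned twice and the nvme2nY entry wins, which matches the trace: ng2n1..ng2n3 are stored first and then overwritten by nvme2n1 onward.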
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:52.865 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.866 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- 
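Every namespace in this run reports flbas 0x4 with lbaf4 marked "(in use)", i.e. format index 4: 4096-byte data blocks (lbads 12) and no metadata (ms 0). The low nibble of flbas selects the active format, and lbads is log2 of the block size. A small sketch decoding that from the parsed values (array literal copied from this run):

    # Decode the in-use LBA format from values captured in this run.
    declare -A nvme2n1=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )
    fmt=$(( ${nvme2n1[flbas]} & 0xf ))        # -> 4
    lbaf=${nvme2n1[lbaf$fmt]}
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *} # -> 12
    echo "$(( 1 << lbads ))-byte blocks"      # -> 4096-byte blocks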
nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.867 22:05:59 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:52.867 22:05:59 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.867 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
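With nsze 0x100000 blocks at that 4096-byte format, each namespace here works out to 4 GiB; one line of shell arithmetic confirms it:

    # nsze (blocks) x block size -> capacity; values from this run.
    echo "$(( 0x100000 * 4096 / 1024**3 )) GiB"   # -> 4 GiB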
]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.868 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:52.869 22:05:59 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.869 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:52.870 
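The xtrace above shows the shape of the nvme/functions.sh nvme_get helper: it runs nvme-cli's id-ns (or id-ctrl) against a device, splits each output row on the first ':' via IFS, skips rows whose value is empty, and evals each key/value pair into a global associative array such as nvme2n2. A minimal sketch of that loop, reconstructed from the trace rather than copied from SPDK's functions.sh, so details may differ:

    # Reconstructed from the xtrace; not verbatim from nvme/functions.sh.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # header rows split to an empty val
            reg=${reg//[[:space:]]/}            # 'lbaf  4 ' -> 'lbaf4'
            eval "${ref}[\$reg]=\"\${val# }\""  # nvme2n3[nsze]=0x100000, ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # usage, as in the trace: nvme_get nvme2n3 id-ns /dev/nvme2n3

The trace below is exactly this loop replaying for the third namespace, nvme2n3.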
22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:52.870 22:05:59 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:52.870 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.871 22:05:59 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:52.871 22:05:59 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:52.871 22:05:59 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:52.871 22:05:59 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:52.871 22:05:59 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.871 22:05:59 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.871 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 
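Just above, the loop finishes registering nvme2 (ctrls, nvmes, bdfs, ordered_ctrls) and moves on to the next controller: it checks that /sys/class/nvme/nvme3 exists, resolves its PCI address (0000:00:13.0), and gates it through scripts/common.sh's pci_can_use, whose bare '[[ =~ 0000:00:13.0 ]]' test indicates the allow/block list variables are empty in this run. Roughly, with the PCI lookup an assumption since that step is outside the captured trace:

    # Illustrative reconstruction of the per-controller loop; how functions.sh
    # actually derives $pci is not visible here, so readlink is an assumption.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        pci_can_use "$pci" || continue    # empty lists: every device allowed
        ctrl_dev=${ctrl##*/}              # e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # ...then register ctrls[$ctrl_dev], nvmes[$ctrl_dev], bdfs[$ctrl_dev]=$pci
    done

What follows is the id-ctrl parse for nvme3 filling the nvme3 associative array.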
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:52.872 22:05:59 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:52.872 22:05:59 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.872 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 
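The oacs=0x12a captured above is the Optional Admin Command Support bitmask from Identify Controller; decoding the set bits against the NVMe spec's bit assignments (0x12a sets bits 1, 3, 5, and 8):

    # OACS bit decode for the value captured in this log:
    oacs=0x12a
    (( oacs & 1<<1 )) && echo 'Format NVM'
    (( oacs & 1<<3 )) && echo 'Namespace Management'
    (( oacs & 1<<5 )) && echo 'Directives'
    (( oacs & 1<<8 )) && echo 'Doorbell Buffer Config'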
22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:52.873 22:05:59 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 
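The wctemp=343 and cctemp=373 fields captured a little above are the warning and critical composite temperature thresholds, which the NVMe spec reports in Kelvin; converting them:

    # Kelvin -> Celsius for the thresholds captured above:
    echo $(( 343 - 273 ))   # wctemp: 70 C warning threshold
    echo $(( 373 - 273 ))   # cctemp: 100 C critical threshold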
22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:52.873 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:52.874 
22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:52.874 22:05:59 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:52.874 22:05:59 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:52.874 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
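The walk traced here has just confirmed Simple Copy support on nvme1 and nvme0; the same check runs for nvme3 and nvme2 just below. Condensed into a self-contained sketch, with names taken from the functions.sh trace and a stand-in array in place of the full id-ctrl parse (the stand-in is illustrative, not from the log):

    # ONCS bit 8 advertises the NVMe Simple Copy Command (SCC);
    # 0x15d has bit 8 set (0x15d & 0x100 != 0), so all four controllers pass.
    ctrl_has_scc() {
        local -n _ctrl=$1            # nameref to a parsed controller array
        local oncs=${_ctrl[oncs]}    # 0x15d on every controller in this run
        (( oncs & 1 << 8 ))          # exit status 0 iff the SCC bit is set
    }
    declare -A nvme1=([oncs]=0x15d)  # illustrative stand-in for the parsed data
    ctrl_has_scc nvme1 && echo nvme1 # prints nvme1, as functions.sh@199 does

The visit order nvme1, nvme0, nvme3, nvme2 is simply bash's unspecified key order for "${!ctrls[@]}"; the harness then returns the first match it collected, which is why nvme1 is selected below.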
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:52.875 22:05:59 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:52.875 22:05:59 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:52.875 22:05:59 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:52.875 22:05:59 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:53.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:54.020 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:54.020 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:54.020 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:54.020 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:54.020 22:06:00 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:54.020 22:06:00 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:54.020 22:06:00 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:54.020 22:06:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:54.020 ************************************
00:09:54.020 START TEST nvme_simple_copy ************************************
00:09:54.020 22:06:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:54.282 Initializing NVMe Controllers
00:09:54.282 Attaching to 0000:00:10.0
00:09:54.282 Controller supports SCC. Attached to 0000:00:10.0
00:09:54.282 Namespace ID: 1 size: 6GB
00:09:54.282 Initialization complete.
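Since nvme1 at 0000:00:10.0 was chosen, the attach shown above can be reproduced by hand with the same binary and transport ID that run_test passed; a minimal sketch, assuming the workspace layout shown in this log and root privileges (the sudo calls are an assumption, not from the log):

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                      # rebind NVMe functions to uio_pci_generic, as logged above
    sudo test/nvme/simple_copy/simple_copy \
        -r 'trtype:pcie traddr:0000:00:10.0'   # transport ID exactly as in nvme_scc.sh@21

The run's own summary follows: it writes LBAs 0-63 with random data, issues one Simple Copy to destination LBA 256, and counts how many copied LBAs match what was written (all 64 here).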
00:09:54.282 
00:09:54.282 Controller QEMU NVMe Ctrl (12340 )
00:09:54.282 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:54.282 Namespace Block Size:4096
00:09:54.282 Writing LBAs 0 to 63 with Random Data
00:09:54.282 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:54.282 LBAs matching Written Data: 64
00:09:54.282 
00:09:54.282 real 0m0.282s
00:09:54.282 user 0m0.105s
00:09:54.282 sys 0m0.074s
00:09:54.282 ************************************
00:09:54.282 22:06:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:54.282 22:06:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:54.282 END TEST nvme_simple_copy
00:09:54.282 ************************************
00:09:54.543 
00:09:54.543 real 0m7.920s
00:09:54.543 user 0m1.143s
00:09:54.543 sys 0m1.502s
00:09:54.543 22:06:00 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:54.543 ************************************
00:09:54.543 END TEST nvme_scc
00:09:54.543 ************************************
00:09:54.543 22:06:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:54.543 22:06:00 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:54.543 22:06:00 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:54.543 22:06:00 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:54.543 22:06:00 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:54.543 22:06:00 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:54.543 22:06:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:54.543 22:06:00 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:54.543 22:06:00 -- common/autotest_common.sh@10 -- # set +x
00:09:54.543 ************************************
00:09:54.543 START TEST nvme_fdp ************************************
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:54.543 * Looking for test storage...
00:09:54.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:54.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:54.543 --rc genhtml_branch_coverage=1
00:09:54.543 --rc genhtml_function_coverage=1
00:09:54.543 --rc genhtml_legend=1
00:09:54.543 --rc geninfo_all_blocks=1
00:09:54.543 --rc geninfo_unexecuted_blocks=1
00:09:54.543 
00:09:54.543 '
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:54.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:54.543 --rc genhtml_branch_coverage=1
00:09:54.543 --rc genhtml_function_coverage=1
00:09:54.543 --rc genhtml_legend=1
00:09:54.543 --rc geninfo_all_blocks=1
00:09:54.543 --rc geninfo_unexecuted_blocks=1
00:09:54.543 
00:09:54.543 '
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:54.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:54.543 --rc genhtml_branch_coverage=1
00:09:54.543 --rc genhtml_function_coverage=1
00:09:54.543 --rc genhtml_legend=1
00:09:54.543 --rc geninfo_all_blocks=1
00:09:54.543 --rc geninfo_unexecuted_blocks=1
00:09:54.543 
00:09:54.543 '
00:09:54.543 22:06:00 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:54.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:54.543 --rc genhtml_branch_coverage=1
00:09:54.543 --rc genhtml_function_coverage=1
00:09:54.543 --rc genhtml_legend=1
00:09:54.543 --rc geninfo_all_blocks=1
00:09:54.543 --rc geninfo_unexecuted_blocks=1
00:09:54.543 
00:09:54.543 '
00:09:54.543 22:06:00 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:54.543 22:06:00 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:54.543 22:06:00 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:09:54.543 22:06:00 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:54.543 22:06:00 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:54.543 22:06:00 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.804 22:06:00 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.804 22:06:00 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.804 22:06:00 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.804 22:06:00 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.804 22:06:00 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:54.804 22:06:00 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:54.804 22:06:00 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:54.804 22:06:00 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:54.804 22:06:00 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:55.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.065 Waiting for block devices as requested 00:09:55.065 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.324 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.324 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.324 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.680 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:00.680 22:06:06 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:00.680 22:06:06 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:00.680 22:06:06 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.680 22:06:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:00.680 22:06:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.680 22:06:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.680 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:00.681 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.681 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.681 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:00.682 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.682 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 
22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:00.683 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:00.683 22:06:06 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:00.683 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:00.683 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
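[Editor's note] The trace above is the nvme_get helper walking `nvme id-ns /dev/ng0n1` output line by line: IFS=: splits each `reg : val` pair, an empty val skips header lines, and eval stores the value in a global associative array named after the device. Below is a minimal standalone sketch of that loop; the name nvme_get_sketch and the whitespace trimming are assumptions for illustration, the real helper lives in nvme/functions.sh.

  #!/usr/bin/env bash
  # Parse `nvme id-ns`/`id-ctrl` style "reg : val" output into a global
  # associative array, mirroring the IFS=:/read/eval pattern in the trace.
  nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"               # e.g. declare -gA ng0n1=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue     # header/blank lines carry no value
          reg=${reg//[[:space:]]/}      # 'nsze   ' -> 'nsze'
          val=${val# }                  # assumes one space after the colon
          eval "${ref}[$reg]=\$val"     # -> ng0n1[nsze]=0x140000
      done < <("$@")
  }
  # usage (assumes nvme-cli is installed and /dev/ng0n1 exists):
  # nvme_get_sketch ng0n1 nvme id-ns /dev/ng0n1; echo "${ng0n1[nsze]}"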
00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:00.684 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.684 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
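[Editor's note] The lbaf0..lbaf7 entries just parsed describe the namespace's eight LBA formats, and flbas=0x4 selects the one in use (lbaf4, marked "(in use)" above). A short sketch, assuming the ng0n1 array populated by the loop and the usual flbas low-nibble layout from the NVMe base spec, decodes block size and raw capacity:

  flbas=$(( ng0n1[flbas] & 0xf ))            # low nibble -> 4, so lbaf4
  fmt=${ng0n1[lbaf$flbas]}                   # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${fmt#*lbads:}; lbads=${lbads%% *}   # -> 12
  bs=$(( 1 << lbads ))                       # 2^12 = 4096-byte blocks
  echo $(( ng0n1[nsze] * bs ))               # 0x140000 * 4096 = 5 GiB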
00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.685 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:00.686 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.686 22:06:06 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.686 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:00.687 22:06:06 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.687 22:06:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:00.687 22:06:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.687 22:06:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:00.687 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.687 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.687 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
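[Editor's note] Fields like oacs=0x12a, frmw=0x3, and lpa=0x7 above are bitmasks, and bash arithmetic can test capability bits directly on the parsed hex strings. The bit positions below follow the NVMe base spec's OACS layout as recalled here, so treat them as an assumption to verify against the spec revision in use:

  oacs=${nvme1[oacs]}                     # 0x12a = bits 1, 3, 5, 8 set
  (( oacs & (1 << 1) )) && echo 'Format NVM supported'
  (( oacs & (1 << 3) )) && echo 'Namespace management supported'
  (( oacs & (1 << 5) )) && echo 'Directives supported'   # FDP I/O uses these
  (( oacs & (1 << 8) )) && echo 'Doorbell buffer config supported'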
00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
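[Editor's note] The wctemp=343 and cctemp=373 values a few fields back are kelvins, which is how NVMe reports temperature thresholds; a quick conversion (integer approximation of 273.15, names hypothetical) shows what this QEMU controller advertises:

  k_to_c() { echo "$(( $1 - 273 ))"; }
  k_to_c "${nvme1[wctemp]}"   # 343 K -> 70 C, warning threshold
  k_to_c "${nvme1[cctemp]}"   # 373 K -> 100 C, critical threshold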
00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:00.688 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.689 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:00.690 22:06:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:00.690 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:00.691 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
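The id-ns dump being filled in here comes from the namespace-discovery loop entered a few entries back (functions.sh@53-58): a nameref aliases the per-controller namespace map, and an extglob pattern matches both the character node (ng1n1) and, as the trace repeats the same parse further down, the block node (nvme1n1). A sketch of that discovery step, reconstructed from the trace (the surrounding script text is inferred, not verbatim):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme1
declare -A nvme1_ns=()
declare -n _ctrl_ns=nvme1_ns       # nameref, as at functions.sh@53

# "ng${ctrl##*nvme}" expands to ng1 and "${ctrl##*/}n" to nvme1n, so the
# @(...) alternation matches ng1n1 and nvme1n1 under the controller dir.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                   # e.g. ng1n1, then nvme1n1
    _ctrl_ns[${ns_dev##*n}]=$ns_dev    # key is the NSID: text past the last 'n'
done
for nsid in "${!nvme1_ns[@]}"; do
    echo "ns $nsid -> ${nvme1_ns[$nsid]}"
done

Both device nodes share NSID 1, so the second assignment at @58 overwrites the first; that is why _ctrl_ns[1] ends up pointing at nvme1n1 by the time the trace reaches functions.sh@60.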
00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.691 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.691 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.692 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:00.692 22:06:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.692 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:00.693 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:00.693 22:06:06 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.693 22:06:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:00.693 22:06:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.693 22:06:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.693 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
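Just before the nvme2 parse now underway, the trace registered nvme1 in the suite-wide lookup tables (functions.sh@60-63) and re-entered the controller loop at @47. A sketch of that bookkeeping, with the array and variable names taken from the trace and the surrounding declarations assumed:

declare -A ctrls=() nvmes=() bdfs=()   # assumed to be declared by the caller
declare -a ordered_ctrls=()

ctrl_dev=nvme1
ctrls["$ctrl_dev"]=nvme1                 # @60: handle -> id-ctrl array name
nvmes["$ctrl_dev"]=nvme1_ns              # @61: handle -> namespace-map name
bdfs["$ctrl_dev"]=0000:00:10.0           # @62: handle -> PCI address (BDF)
ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # @63: sparse index 1 keeps numeric order

# Later code can reach any parsed field through a nameref, e.g.:
#   declare -n c=${ctrls[nvme1]}
#   echo "${c[subnqn]}"    # -> nqn.2019-08.org.qemu:12340 for this controller

Storing array *names* (nvme1, nvme1_ns) rather than values is what lets one associative array per controller coexist with flat lookup tables keyed by device handle.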
00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:00.694 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.694 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:00.695 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.695 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:00.696 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 
22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.697 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.698 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:00.699 22:06:06 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 
22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.699 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:00.700 
22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
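What repeats above is the harness's namespace-introspection pattern in nvme/functions.sh: line @16 runs nvme-cli's id-ns against the device node, and the loop at @21-@23 splits each "field : value" line of that output on ':' and evals every non-empty value into a global associative array named after the namespace (ng2n3 here, per the local -gA at @20). A minimal sketch of that loop, assuming nvme-cli's human-readable id-ns output format and using a hypothetical helper name:

nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                     # same global-array declaration seen at @20
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "lbaf  4 " -> "lbaf4", "nsze " -> "nsze"
        val=${val# }                        # drop the single space after ':'
        [[ -n $val ]] && eval "${ref}[${reg}]=\"\$val\""   # e.g. ng2n3[nsze]="0x100000"
    done < <("$@")
}

Usage mirroring the trace would be nvme_get_sketch ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3, after which ${ng2n3[nsze]} expands to 0x100000.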
00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:00.700 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:00.701 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.701 22:06:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:00.701 22:06:06 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:00.701 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.702 
22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:00.702 22:06:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.702 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.703 
22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
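With nvme2n1's LBA-format table now captured, the fields decode directly: FLBAS bits 3:0 select the active format (0x4 -> lbaf4, the entry flagged "(in use)"), each format's lbads is log2 of the data block size, and nsze times that block size gives the raw capacity. A worked decode of the values read above:

flbas=0x4 nsze=0x100000 lbads=12        # as captured for nvme2n1
fmt=$(( flbas & 0xf ))                  # -> 4, i.e. lbaf4, the format "(in use)"
bs=$(( 1 << lbads ))                    # lbads:12 -> 4096-byte blocks (ms:0 -> no metadata)
bytes=$(( nsze * bs ))                  # 1048576 blocks * 4096 B = 4294967296 B
echo "lbaf$fmt: ${bs} B blocks, $(( bytes >> 30 )) GiB"   # -> lbaf4: 4096 B blocks, 4 GiB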
00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:00.703 22:06:06 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.703 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:00.704 22:06:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.704 22:06:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:07 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.704 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.704 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.704 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.704 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.704 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.704 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:00.705 22:06:07 
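The hand-offs between namespaces above come from the enumeration loop at functions.sh@54-58: it globs the controller's sysfs directory with an extglob pattern matching both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...), and keys _ctrl_ns by the trailing namespace id, which is why each nvme2nX entry, scanned after its ng2nX twin, overwrites the same slot. A sketch of that loop, reconstructed from the trace rather than the verbatim source:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns=()
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # /sys/.../nvme2/@(ng2|nvme2n)*
    ns_dev=${ns##*/}                  # ng2n3, nvme2n1, ...
    # nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # populate the array as sketched earlier
    _ctrl_ns[${ns##*n}]=$ns_dev       # key = namespace id, so nvme2nX replaces ng2nX
done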
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:00.705 22:06:07 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.705 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.706 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.969 22:06:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:00.969 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.970 22:06:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:00.970 22:06:07 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.970 22:06:07 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:00.970 22:06:07 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.970 22:06:07 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
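The trace above is nvme/functions.sh's nvme_get at work: it runs nvme-cli's id-ctrl against /dev/nvme3 and folds each "reg : val" output line into a bash associative array named after the controller, using IFS=: with read -r plus an eval so values containing spaces or further colons survive intact. A minimal stand-alone sketch of that parsing idiom, with a hypothetical helper name (the real function also handles the argument shift and quoting visible in the trace):

parse_id_regs() {
    # Condensed form of the nvme_get pattern: one global associative
    # array per device node, keyed by register name.
    local dev=$1 name=${1##*/} reg val
    declare -gA "$name=()"
    local -n _out=$name
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # keys arrive padded with spaces
        [[ -n $reg && -n $val ]] || continue
        _out[$reg]=${val# }           # rest of the line, embedded ':' and all
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
}

parse_id_regs /dev/nvme3
echo "vid=${nvme3[vid]} ctratt=${nvme3[ctratt]}"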
00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.970 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 
22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:00.971 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
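The sqes/cqes values captured just above encode queue entry sizes as powers of two: the low nibble is the required size, the high nibble the maximum, so 0x66 means 64-byte submission queue entries and 0x44 means 16-byte completion queue entries. A quick stand-alone decode of that packing (values taken from the trace; not part of functions.sh):

# Decode the NVMe SQES/CQES nibbles recorded above.
sqes=0x66 cqes=0x44
printf 'SQE: min %d, max %d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4 & 0xf) ))
printf 'CQE: min %d, max %d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4 & 0xf) ))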
00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.972 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
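Several of these registers are bitmasks rather than scalars; oncs=0x15d above, for instance, advertises the optional NVM commands this QEMU controller implements. An illustrative stand-alone decode using the NVMe base-spec bit assignments (snippet and bit names are editorial, not part of functions.sh):

# ONCS bit names per the NVMe base specification.
oncs=0x15d
names=([0]='Compare' [1]='Write Uncorrectable' [2]='Dataset Management'
       [3]='Write Zeroes' [4]='Save/Select in Features' [5]='Reservations'
       [6]='Timestamp' [7]='Verify' [8]='Copy')
for bit in "${!names[@]}"; do
    (( oncs & 1 << bit )) && echo "ONCS bit $bit: ${names[bit]}"
done
# 0x15d -> Compare, Dataset Management, Write Zeroes, Save/Select,
# Timestamp and Copy are supported.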
00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:00.973 22:06:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:00.973 22:06:07 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:00.973 22:06:07 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:00.973 22:06:07 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:01.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.808 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.070 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.070 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.070 22:06:08 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:02.070 22:06:08 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:02.070 22:06:08 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.070 22:06:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:02.070 ************************************ 00:10:02.070 START TEST nvme_flexible_data_placement 00:10:02.070 ************************************ 00:10:02.070 22:06:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:02.331 Initializing NVMe Controllers 00:10:02.331 Attaching to 0000:00:13.0 00:10:02.331 Controller supports FDP Attached to 0000:00:13.0 00:10:02.331 Namespace ID: 1 Endurance Group ID: 1 00:10:02.331 Initialization complete. 
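The controller selection traced at functions.sh@180 above comes down to one bit test: CTRATT bit 19 advertises Flexible Data Placement, so nvme3's ctratt of 0x88010 (0x80000 | 0x8000 | 0x10) qualifies while the 0x8000 reported by the other three controllers does not. An equivalent stand-alone probe, assuming the same nvme-cli binary the harness uses:

# Check CTRATT bit 19 (FDP support) the same way ctrl_has_fdp does.
ctratt=$(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 |
         awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
if (( ctratt & 1 << 19 )); then
    echo "nvme3 supports FDP (ctratt=$ctratt)"
fi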
00:10:02.331 00:10:02.331 ================================== 00:10:02.331 == FDP tests for Namespace: #01 == 00:10:02.331 ================================== 00:10:02.331 00:10:02.331 Get Feature: FDP: 00:10:02.331 ================= 00:10:02.331 Enabled: Yes 00:10:02.331 FDP configuration Index: 0 00:10:02.331 00:10:02.331 FDP configurations log page 00:10:02.331 =========================== 00:10:02.331 Number of FDP configurations: 1 00:10:02.331 Version: 0 00:10:02.331 Size: 112 00:10:02.331 FDP Configuration Descriptor: 0 00:10:02.331 Descriptor Size: 96 00:10:02.331 Reclaim Group Identifier format: 2 00:10:02.331 FDP Volatile Write Cache: Not Present 00:10:02.331 FDP Configuration: Valid 00:10:02.331 Vendor Specific Size: 0 00:10:02.331 Number of Reclaim Groups: 2 00:10:02.331 Number of Reclaim Unit Handles: 8 00:10:02.331 Max Placement Identifiers: 128 00:10:02.331 Number of Namespaces Supported: 256 00:10:02.331 Reclaim Unit Nominal Size: 6000000 bytes 00:10:02.331 Estimated Reclaim Unit Time Limit: Not Reported 00:10:02.331 RUH Desc #000: RUH Type: Initially Isolated 00:10:02.331 RUH Desc #001: RUH Type: Initially Isolated 00:10:02.331 RUH Desc #002: RUH Type: Initially Isolated 00:10:02.331 RUH Desc #003: RUH Type: Initially Isolated 00:10:02.331 RUH Desc #004: RUH Type: Initially Isolated 00:10:02.331 RUH Desc #005: RUH Type: Initially Isolated 00:10:02.331 RUH Desc #006: RUH Type: Initially Isolated 00:10:02.331 RUH Desc #007: RUH Type: Initially Isolated 00:10:02.331 00:10:02.331 FDP reclaim unit handle usage log page 00:10:02.331 ====================================== 00:10:02.331 Number of Reclaim Unit Handles: 8 00:10:02.331 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:02.331 RUH Usage Desc #001: RUH Attributes: Unused 00:10:02.331 RUH Usage Desc #002: RUH Attributes: Unused 00:10:02.331 RUH Usage Desc #003: RUH Attributes: Unused 00:10:02.331 RUH Usage Desc #004: RUH Attributes: Unused 00:10:02.331 RUH Usage Desc #005: RUH Attributes: Unused 00:10:02.331 RUH Usage Desc #006: RUH Attributes: Unused 00:10:02.331 RUH Usage Desc #007: RUH Attributes: Unused 00:10:02.331 00:10:02.331 FDP statistics log page 00:10:02.331 ======================= 00:10:02.331 Host bytes with metadata written: 2094120960 00:10:02.331 Media bytes with metadata written: 2095255552 00:10:02.331 Media bytes erased: 0 00:10:02.331 00:10:02.331 FDP Reclaim unit handle status 00:10:02.331 ============================== 00:10:02.331 Number of RUHS descriptors: 2 00:10:02.331 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000012e4 00:10:02.331 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:02.331 00:10:02.331 FDP write on placement id: 0 success 00:10:02.331 00:10:02.331 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:02.331 00:10:02.331 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:02.331 00:10:02.331 Get Feature: FDP Events for Placement handle: #0 00:10:02.331 ======================== 00:10:02.331 Number of FDP Events: 6 00:10:02.331 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:02.331 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:02.331 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:02.331 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:02.331 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:02.331 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:02.331 00:10:02.331 FDP events log
page 00:10:02.331 =================== 00:10:02.331 Number of FDP events: 1 00:10:02.331 FDP Event #0: 00:10:02.331 Event Type: RU Not Written to Capacity 00:10:02.331 Placement Identifier: Valid 00:10:02.331 NSID: Valid 00:10:02.331 Location: Valid 00:10:02.331 Placement Identifier: 0 00:10:02.331 Event Timestamp: 5 00:10:02.331 Namespace Identifier: 1 00:10:02.331 Reclaim Group Identifier: 0 00:10:02.331 Reclaim Unit Handle Identifier: 0 00:10:02.331 00:10:02.331 FDP test passed 00:10:02.331 00:10:02.331 real 0m0.241s 00:10:02.331 user 0m0.073s 00:10:02.331 sys 0m0.065s 00:10:02.331 22:06:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.331 ************************************ 00:10:02.331 22:06:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:02.331 END TEST nvme_flexible_data_placement 00:10:02.331 ************************************ 00:10:02.331 00:10:02.331 real 0m7.874s 00:10:02.331 user 0m1.115s 00:10:02.331 sys 0m1.470s 00:10:02.331 ************************************ 00:10:02.331 END TEST nvme_fdp 00:10:02.331 ************************************ 00:10:02.331 22:06:08 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.331 22:06:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:02.332 22:06:08 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:02.332 22:06:08 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:02.332 22:06:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.332 22:06:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.332 22:06:08 -- common/autotest_common.sh@10 -- # set +x 00:10:02.332 ************************************ 00:10:02.332 START TEST nvme_rpc 00:10:02.332 ************************************ 00:10:02.332 22:06:08 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:02.593 * Looking for test storage... 
00:10:02.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.593 22:06:08 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.593 --rc genhtml_branch_coverage=1 00:10:02.593 --rc genhtml_function_coverage=1 00:10:02.593 --rc genhtml_legend=1 00:10:02.593 --rc geninfo_all_blocks=1 00:10:02.593 --rc geninfo_unexecuted_blocks=1 00:10:02.593 00:10:02.593 ' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.593 --rc genhtml_branch_coverage=1 00:10:02.593 --rc genhtml_function_coverage=1 00:10:02.593 --rc genhtml_legend=1 00:10:02.593 --rc geninfo_all_blocks=1 00:10:02.593 --rc geninfo_unexecuted_blocks=1 00:10:02.593 00:10:02.593 ' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.593 --rc genhtml_branch_coverage=1 00:10:02.593 --rc genhtml_function_coverage=1 00:10:02.593 --rc genhtml_legend=1 00:10:02.593 --rc geninfo_all_blocks=1 00:10:02.593 --rc geninfo_unexecuted_blocks=1 00:10:02.593 00:10:02.593 ' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.593 --rc genhtml_branch_coverage=1 00:10:02.593 --rc genhtml_function_coverage=1 00:10:02.593 --rc genhtml_legend=1 00:10:02.593 --rc geninfo_all_blocks=1 00:10:02.593 --rc geninfo_unexecuted_blocks=1 00:10:02.593 00:10:02.593 ' 00:10:02.593 22:06:08 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.593 22:06:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:02.593 22:06:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:02.593 22:06:08 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:02.593 22:06:08 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=79303 00:10:02.593 22:06:08 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:02.593 22:06:08 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 79303 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 79303 ']' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.593 22:06:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.854 [2024-12-16 22:06:08.964218] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
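Stripped of tracing, the nvme_rpc flow is short; a minimal sketch under the same repo layout, assuming a root shell with hugepages already configured (the attach, apply_firmware, and detach calls appear in the trace that follows, and waitforlisten is the helper from common/autotest_common.sh):

# Pick the first NVMe BDF, start the target, exercise the firmware RPC
# against a missing file (expected to fail), then tear everything down.
rootdir=/home/vagrant/spdk_repo/spdk
rpc=$rootdir/scripts/rpc.py
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)  # 0000:00:10.0 here
"$rootdir/build/bin/spdk_tgt" -m 0x3 &
spdk_tgt_pid=$!
trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten $spdk_tgt_pid
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"  # creates bdev Nvme0n1
$rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo 'open file failed, as expected'
$rpc bdev_nvme_detach_controller Nvme0
kill -9 $spdk_tgt_pid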
00:10:02.854 [2024-12-16 22:06:08.964371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79303 ] 00:10:02.854 [2024-12-16 22:06:09.127112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.854 [2024-12-16 22:06:09.157689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.854 [2024-12-16 22:06:09.157787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.793 22:06:09 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.793 22:06:09 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:03.793 22:06:09 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:03.793 Nvme0n1 00:10:03.793 22:06:10 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:03.793 22:06:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:04.054 request: 00:10:04.054 { 00:10:04.054 "bdev_name": "Nvme0n1", 00:10:04.054 "filename": "non_existing_file", 00:10:04.054 "method": "bdev_nvme_apply_firmware", 00:10:04.054 "req_id": 1 00:10:04.054 } 00:10:04.054 Got JSON-RPC error response 00:10:04.054 response: 00:10:04.054 { 00:10:04.054 "code": -32603, 00:10:04.054 "message": "open file failed." 00:10:04.054 } 00:10:04.054 22:06:10 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:04.054 22:06:10 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:04.054 22:06:10 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:04.313 22:06:10 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:04.313 22:06:10 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 79303 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 79303 ']' 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 79303 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79303 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.313 killing process with pid 79303 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79303' 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@973 -- # kill 79303 00:10:04.313 22:06:10 nvme_rpc -- common/autotest_common.sh@978 -- # wait 79303 00:10:04.571 00:10:04.571 real 0m2.167s 00:10:04.571 user 0m4.174s 00:10:04.571 sys 0m0.557s 00:10:04.571 ************************************ 00:10:04.571 END TEST nvme_rpc 00:10:04.571 ************************************ 00:10:04.571 22:06:10 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.571 22:06:10 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.571 22:06:10 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:04.571 22:06:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:04.571 22:06:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.571 22:06:10 -- common/autotest_common.sh@10 -- # set +x 00:10:04.571 ************************************ 00:10:04.571 START TEST nvme_rpc_timeouts 00:10:04.571 ************************************ 00:10:04.571 22:06:10 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:04.830 * Looking for test storage... 00:10:04.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:04.830 22:06:10 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.830 22:06:10 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.830 22:06:10 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.830 22:06:11 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.830 --rc genhtml_branch_coverage=1 00:10:04.830 --rc genhtml_function_coverage=1 00:10:04.830 --rc genhtml_legend=1 00:10:04.830 --rc geninfo_all_blocks=1 00:10:04.830 --rc geninfo_unexecuted_blocks=1 00:10:04.830 00:10:04.830 ' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.830 --rc genhtml_branch_coverage=1 00:10:04.830 --rc genhtml_function_coverage=1 00:10:04.830 --rc genhtml_legend=1 00:10:04.830 --rc geninfo_all_blocks=1 00:10:04.830 --rc geninfo_unexecuted_blocks=1 00:10:04.830 00:10:04.830 ' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.830 --rc genhtml_branch_coverage=1 00:10:04.830 --rc genhtml_function_coverage=1 00:10:04.830 --rc genhtml_legend=1 00:10:04.830 --rc geninfo_all_blocks=1 00:10:04.830 --rc geninfo_unexecuted_blocks=1 00:10:04.830 00:10:04.830 ' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.830 --rc genhtml_branch_coverage=1 00:10:04.830 --rc genhtml_function_coverage=1 00:10:04.830 --rc genhtml_legend=1 00:10:04.830 --rc geninfo_all_blocks=1 00:10:04.830 --rc geninfo_unexecuted_blocks=1 00:10:04.830 00:10:04.830 ' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.830 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79357 00:10:04.830 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79357 00:10:04.830 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79389 00:10:04.830 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
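The cmp_versions walk traced above (and repeated before each suite) splits the two version strings on dots and compares them field by field. A compact alternative sketch using GNU sort -V instead, on the assumption that coreutils version sort is available:

version_lt() {
  # True when $1 sorts strictly before $2 in version order.
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'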
00:10:04.830 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79389 00:10:04.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 79389 ']' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.830 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:04.830 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:04.830 [2024-12-16 22:06:11.101294] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:04.830 [2024-12-16 22:06:11.101411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79389 ] 00:10:05.091 [2024-12-16 22:06:11.259348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:05.091 [2024-12-16 22:06:11.279522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.091 [2024-12-16 22:06:11.279594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.662 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.662 Checking default timeout settings: 00:10:05.662 22:06:11 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:05.662 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:05.662 22:06:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:06.236 Making settings changes with rpc: 00:10:06.236 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:06.236 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:06.236 Check default vs. modified settings: 00:10:06.236 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:06.236 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79357 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79357 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:06.809 Setting action_on_timeout is changed as expected. 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79357 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79357 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:06.809 Setting timeout_us is changed as expected. 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79357 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79357 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:06.809 Setting timeout_admin_us is changed as expected. 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79357 /tmp/settings_modified_79357 00:10:06.809 22:06:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79389 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79389 ']' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79389 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79389 00:10:06.809 killing process with pid 79389 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79389' 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79389 00:10:06.809 22:06:12 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79389 00:10:07.071 RPC TIMEOUT SETTING TEST PASSED. 00:10:07.071 22:06:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
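All three "changed as expected" checks above follow one snapshot-modify-snapshot-compare pattern; a minimal standalone sketch of it, assuming a target already listening on the default socket:

# Snapshot settings, apply new timeouts, snapshot again, then compare each
# field the same way the test does (grep/awk/sed as traced above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done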
00:10:07.071 00:10:07.071 real 0m2.348s 00:10:07.071 user 0m4.760s 00:10:07.071 sys 0m0.501s 00:10:07.071 ************************************ 00:10:07.071 END TEST nvme_rpc_timeouts 00:10:07.071 ************************************ 00:10:07.071 22:06:13 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.071 22:06:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 22:06:13 -- spdk/autotest.sh@239 -- # uname -s 00:10:07.071 22:06:13 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:07.071 22:06:13 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:07.071 22:06:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.071 22:06:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.071 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:10:07.071 ************************************ 00:10:07.071 START TEST sw_hotplug 00:10:07.071 ************************************ 00:10:07.071 22:06:13 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:07.071 * Looking for test storage... 00:10:07.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:07.071 22:06:13 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.071 22:06:13 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.071 22:06:13 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.332 22:06:13 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.332 22:06:13 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:07.332 22:06:13 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.332 22:06:13 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.332 --rc genhtml_branch_coverage=1 00:10:07.332 --rc genhtml_function_coverage=1 00:10:07.332 --rc genhtml_legend=1 00:10:07.332 --rc geninfo_all_blocks=1 00:10:07.332 --rc geninfo_unexecuted_blocks=1 00:10:07.332 00:10:07.332 ' 00:10:07.332 22:06:13 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.332 --rc genhtml_branch_coverage=1 00:10:07.332 --rc genhtml_function_coverage=1 00:10:07.332 --rc genhtml_legend=1 00:10:07.332 --rc geninfo_all_blocks=1 00:10:07.332 --rc geninfo_unexecuted_blocks=1 00:10:07.332 00:10:07.332 ' 00:10:07.332 22:06:13 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.332 --rc genhtml_branch_coverage=1 00:10:07.332 --rc genhtml_function_coverage=1 00:10:07.332 --rc genhtml_legend=1 00:10:07.332 --rc geninfo_all_blocks=1 00:10:07.332 --rc geninfo_unexecuted_blocks=1 00:10:07.332 00:10:07.332 ' 00:10:07.332 22:06:13 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.332 --rc genhtml_branch_coverage=1 00:10:07.332 --rc genhtml_function_coverage=1 00:10:07.332 --rc genhtml_legend=1 00:10:07.332 --rc geninfo_all_blocks=1 00:10:07.332 --rc geninfo_unexecuted_blocks=1 00:10:07.332 00:10:07.332 ' 00:10:07.332 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:07.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.595 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.595 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.595 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.595 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:07.595 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:07.595 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:07.595 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
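nvme_in_userspace, expanded at length in the trace that follows, boils down to one lspci pipeline plus a per-device eligibility check; the pipeline itself, copied from the trace:

# List PCI functions of class 01 (mass storage), subclass 08 (NVM),
# prog-if 02 (NVM Express) by BDF.
lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# On this host that yields 0000:00:10.0 through 0000:00:13.0; sw_hotplug
# then keeps only the first nvme_count=2 of them.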
00:10:07.595 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.595 22:06:13 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.595 22:06:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:07.596 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:07.857 22:06:13 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:07.857 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:07.857 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:07.857 22:06:13 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:08.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:08.118 Waiting for block devices as requested 00:10:08.118 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.379 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.379 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.379 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.674 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:13.674 22:06:19 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:13.674 22:06:19 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:13.933 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:13.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:13.933 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:14.194 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:14.454 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.454 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.454 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:14.454 22:06:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:14.714 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:14.714 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:14.714 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=80232 00:10:14.714 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:14.714 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:14.715 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:14.715 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:14.715 22:06:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:14.715 22:06:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:14.715 22:06:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:14.715 22:06:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:14.715 22:06:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:14.715 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:14.715 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:14.715 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:14.715 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:14.715 22:06:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:14.976 Initializing NVMe Controllers 00:10:14.976 Attaching to 0000:00:10.0 00:10:14.976 Attaching to 0000:00:11.0 00:10:14.976 Attached to 0000:00:11.0 00:10:14.976 Attached to 0000:00:10.0 00:10:14.976 Initialization complete. Starting I/O... 
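The stress itself is one example binary wrapped in a shell timer (TIMEFORMAT=%2R produces the 42.84 figure further down); the invocation, copied from the trace above, with hotplug_wait=6 feeding both -n and -r:

# Flags copied verbatim from this run; the three hot-remove/attach rounds
# (hotplug_events=3) are driven by the surrounding shell loop, not by the
# binary itself.
/home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning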
00:10:14.976 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:14.976 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:14.976 00:10:15.919 QEMU NVMe Ctrl (12341 ): 2468 I/Os completed (+2468) 00:10:15.919 QEMU NVMe Ctrl (12340 ): 2470 I/Os completed (+2470) 00:10:15.919 00:10:16.854 QEMU NVMe Ctrl (12341 ): 5936 I/Os completed (+3468) 00:10:16.854 QEMU NVMe Ctrl (12340 ): 5941 I/Os completed (+3471) 00:10:16.854 00:10:17.789 QEMU NVMe Ctrl (12341 ): 9752 I/Os completed (+3816) 00:10:17.789 QEMU NVMe Ctrl (12340 ): 9757 I/Os completed (+3816) 00:10:17.789 00:10:19.164 QEMU NVMe Ctrl (12341 ): 13945 I/Os completed (+4193) 00:10:19.164 QEMU NVMe Ctrl (12340 ): 14015 I/Os completed (+4258) 00:10:19.164 00:10:20.106 QEMU NVMe Ctrl (12341 ): 17518 I/Os completed (+3573) 00:10:20.106 QEMU NVMe Ctrl (12340 ): 17660 I/Os completed (+3645) 00:10:20.106 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.679 [2024-12-16 22:06:26.892584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:20.679 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:20.679 [2024-12-16 22:06:26.893527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.893569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.893582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.893594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:20.679 [2024-12-16 22:06:26.894511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.894536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.894545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.894556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:20.679 [2024-12-16 22:06:26.916351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:20.679 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:20.679 [2024-12-16 22:06:26.917124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.917157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.917170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.917182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:20.679 [2024-12-16 22:06:26.918024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.918051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.918065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 [2024-12-16 22:06:26.918075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.679 22:06:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:20.939 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:20.939 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.940 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:20.940 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:20.940 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:20.940 Attaching to 0000:00:10.0 00:10:20.940 Attached to 0000:00:10.0 00:10:20.940 QEMU NVMe Ctrl (12340 ): 68 I/Os completed (+68) 00:10:20.940 00:10:20.940 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:20.940 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:20.940 22:06:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:20.940 Attaching to 0000:00:11.0 00:10:20.940 Attached to 0000:00:11.0 00:10:21.883 QEMU NVMe Ctrl (12340 ): 3360 I/Os completed (+3292) 00:10:21.883 QEMU NVMe Ctrl (12341 ): 3001 I/Os completed (+3001) 00:10:21.883 00:10:22.828 QEMU NVMe Ctrl (12340 ): 6424 I/Os completed (+3064) 00:10:22.828 QEMU NVMe Ctrl (12341 ): 6069 I/Os completed (+3068) 00:10:22.828 00:10:23.771 QEMU NVMe Ctrl (12340 ): 10761 I/Os completed (+4337) 00:10:23.771 QEMU NVMe Ctrl (12341 ): 10389 I/Os completed (+4320) 00:10:23.771 00:10:25.157 QEMU NVMe Ctrl (12340 ): 14954 I/Os completed (+4193) 00:10:25.157 QEMU NVMe Ctrl (12341 ): 14567 I/Os completed (+4178) 00:10:25.157 00:10:26.100 QEMU NVMe Ctrl (12340 ): 19203 I/Os completed (+4249) 00:10:26.100 QEMU NVMe Ctrl (12341 ): 18826 I/Os completed (+4259) 00:10:26.100 00:10:27.043 QEMU NVMe Ctrl (12340 ): 23363 I/Os completed (+4160) 00:10:27.043 QEMU NVMe Ctrl (12341 ): 23012 I/Os completed (+4186) 00:10:27.043 00:10:27.985 QEMU NVMe Ctrl (12340 ): 27534 I/Os completed (+4171) 00:10:27.985 QEMU NVMe Ctrl (12341 ): 27176 I/Os completed (+4164) 00:10:27.985 00:10:28.928 QEMU NVMe Ctrl (12340 ): 31710 I/Os completed (+4176) 00:10:28.928 
QEMU NVMe Ctrl (12341 ): 31353 I/Os completed (+4177) 00:10:28.928 00:10:29.869 QEMU NVMe Ctrl (12340 ): 35777 I/Os completed (+4067) 00:10:29.869 QEMU NVMe Ctrl (12341 ): 35407 I/Os completed (+4054) 00:10:29.869 00:10:30.813 QEMU NVMe Ctrl (12340 ): 38797 I/Os completed (+3020) 00:10:30.813 QEMU NVMe Ctrl (12341 ): 38437 I/Os completed (+3030) 00:10:30.813 00:10:31.754 QEMU NVMe Ctrl (12340 ): 41821 I/Os completed (+3024) 00:10:31.754 QEMU NVMe Ctrl (12341 ): 41461 I/Os completed (+3024) 00:10:31.754 00:10:33.138 QEMU NVMe Ctrl (12340 ): 45973 I/Os completed (+4152) 00:10:33.138 QEMU NVMe Ctrl (12341 ): 45605 I/Os completed (+4144) 00:10:33.138 00:10:33.138 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:33.138 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:33.138 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.138 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.138 [2024-12-16 22:06:39.159458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:33.138 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:33.139 [2024-12-16 22:06:39.160438] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.160477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.160491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.160510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:33.139 [2024-12-16 22:06:39.161725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.161759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.161772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.161785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.139 [2024-12-16 22:06:39.181611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:33.139 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:33.139 [2024-12-16 22:06:39.182553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.182586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.182602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.182616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:33.139 [2024-12-16 22:06:39.183619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.183646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.183662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 [2024-12-16 22:06:39.183673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:33.139 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:33.139 EAL: Scan for (pci) bus failed. 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:33.139 Attaching to 0000:00:10.0 00:10:33.139 Attached to 0000:00:10.0 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:33.139 22:06:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:33.139 Attaching to 0000:00:11.0 00:10:33.139 Attached to 0000:00:11.0 00:10:34.081 QEMU NVMe Ctrl (12340 ): 2613 I/Os completed (+2613) 00:10:34.081 QEMU NVMe Ctrl (12341 ): 2324 I/Os completed (+2324) 00:10:34.081 00:10:35.025 QEMU NVMe Ctrl (12340 ): 5838 I/Os completed (+3225) 00:10:35.025 QEMU NVMe Ctrl (12341 ): 5639 I/Os completed (+3315) 00:10:35.025 00:10:35.967 QEMU NVMe Ctrl (12340 ): 8922 I/Os completed (+3084) 00:10:35.967 QEMU NVMe Ctrl (12341 ): 8729 I/Os completed (+3090) 00:10:35.967 00:10:36.909 QEMU NVMe Ctrl (12340 ): 12321 I/Os completed (+3399) 00:10:36.909 QEMU NVMe Ctrl (12341 ): 12140 I/Os completed (+3411) 00:10:36.909 00:10:37.852 QEMU NVMe Ctrl (12340 ): 16818 I/Os completed (+4497) 00:10:37.852 QEMU NVMe Ctrl (12341 ): 16635 I/Os completed (+4495) 00:10:37.852 00:10:38.796 QEMU NVMe Ctrl (12340 ): 21053 I/Os completed (+4235) 00:10:38.796 QEMU NVMe Ctrl (12341 ): 20865 I/Os completed (+4230) 00:10:38.796 00:10:40.180 QEMU NVMe Ctrl (12340 ): 25311 I/Os completed (+4258) 00:10:40.180 QEMU NVMe Ctrl (12341 ): 25143 I/Os completed (+4278) 00:10:40.180 
00:10:40.752 QEMU NVMe Ctrl (12340 ): 29398 I/Os completed (+4087) 00:10:40.752 QEMU NVMe Ctrl (12341 ): 29247 I/Os completed (+4104) 00:10:40.752 00:10:42.138 QEMU NVMe Ctrl (12340 ): 32811 I/Os completed (+3413) 00:10:42.138 QEMU NVMe Ctrl (12341 ): 32669 I/Os completed (+3422) 00:10:42.138 00:10:43.082 QEMU NVMe Ctrl (12340 ): 37006 I/Os completed (+4195) 00:10:43.082 QEMU NVMe Ctrl (12341 ): 36862 I/Os completed (+4193) 00:10:43.082 00:10:44.090 QEMU NVMe Ctrl (12340 ): 41266 I/Os completed (+4260) 00:10:44.090 QEMU NVMe Ctrl (12341 ): 41141 I/Os completed (+4279) 00:10:44.090 00:10:45.033 QEMU NVMe Ctrl (12340 ): 45477 I/Os completed (+4211) 00:10:45.033 QEMU NVMe Ctrl (12341 ): 45315 I/Os completed (+4174) 00:10:45.033 00:10:45.294 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:45.294 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:45.294 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.294 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.294 [2024-12-16 22:06:51.465186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:45.294 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:45.294 [2024-12-16 22:06:51.466014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.466050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.466063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.466080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:45.294 [2024-12-16 22:06:51.467137] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.467163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.467173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.467184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:45.294 EAL: Scan for (pci) bus failed. 00:10:45.294 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.294 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.294 [2024-12-16 22:06:51.483756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:45.294 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:45.294 [2024-12-16 22:06:51.484473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.484502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.484514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.484524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:45.294 [2024-12-16 22:06:51.485363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.294 [2024-12-16 22:06:51.485389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.295 [2024-12-16 22:06:51.485400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.295 [2024-12-16 22:06:51.485411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.295 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:45.295 EAL: Scan for (pci) bus failed. 00:10:45.295 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:45.295 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:45.295 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:45.295 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:45.295 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:45.295 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:45.555 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:45.555 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:45.555 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:45.555 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:45.555 Attaching to 0000:00:10.0 00:10:45.555 Attached to 0000:00:10.0 00:10:45.555 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:45.555 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:45.555 22:06:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:45.555 Attaching to 0000:00:11.0 00:10:45.555 Attached to 0000:00:11.0 00:10:45.555 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:45.556 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:45.556 [2024-12-16 22:06:51.731625] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:57.790 22:07:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:57.790 22:07:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.790 22:07:03 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.84 00:10:57.790 22:07:03 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.84 00:10:57.790 22:07:03 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:57.790 22:07:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.84 00:10:57.790 22:07:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.84 2 00:10:57.790 remove_attach_helper took 42.84s to complete (handling 2 nvme drive(s)) 22:07:03 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 80232 00:11:04.377 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (80232) - No such process 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 80232 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80782 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80782 00:11:04.377 22:07:09 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80782 ']' 00:11:04.377 22:07:09 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.377 22:07:09 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.377 22:07:09 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.377 22:07:09 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:04.377 22:07:09 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.377 22:07:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 [2024-12-16 22:07:09.825478] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
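This is where the block-layer phase (tgt_run_hotplug) begins: the previous app (pid 80232) is confirmed gone, a fresh spdk_tgt is launched as pid 80782, and waitforlisten blocks until its RPC socket answers. A rough sketch of that wait, matching the rpc_addr=/var/tmp/spdk.sock and max_retries=100 locals visible in the trace (the rpc_get_methods probe is an assumption; the real helper in autotest_common.sh differs in detail):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died before listening
            if [[ -S $rpc_addr ]] &&
                scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                               # socket up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }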
00:11:04.377 [2024-12-16 22:07:09.825635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80782 ] 00:11:04.377 [2024-12-16 22:07:09.987551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.377 [2024-12-16 22:07:10.016368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:04.378 22:07:10 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:04.378 22:07:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.962 22:07:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.962 22:07:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.962 22:07:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.962 22:07:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.962 [2024-12-16 22:07:16.772198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:10.962 [2024-12-16 22:07:16.773285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.962 [2024-12-16 22:07:16.773320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.962 [2024-12-16 22:07:16.773337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.962 [2024-12-16 22:07:16.773350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.962 [2024-12-16 22:07:16.773358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.962 [2024-12-16 22:07:16.773366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.962 [2024-12-16 22:07:16.773375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.962 [2024-12-16 22:07:16.773381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.962 [2024-12-16 22:07:16.773389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.962 [2024-12-16 22:07:16.773395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.962 [2024-12-16 22:07:16.773403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.962 [2024-12-16 22:07:16.773410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.962 [2024-12-16 22:07:17.172193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
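With the target up, line 115 of the script enables SPDK's own hotplug monitor over RPC before re-running the remove/attach helper, this time with use_bdev=true so device presence is checked through the bdev layer instead of the raw PCI bus. The equivalent manual invocation (rpc.py relative to the SPDK repo root, with its standard -s socket flag):

    # Enable the NVMe hotplug monitor on the running target (sw_hotplug.sh@115).
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e

    # Helper parameters, per the trace:
    #   hotplug_events=3   three remove/attach cycles
    #   hotplug_wait=6     base wait in seconds (the sleep 12 at @66 is twice this)
    #   use_bdev=true      poll bdev_get_bdevs rather than the PCI bus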
00:11:10.962 [2024-12-16 22:07:17.173245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.962 [2024-12-16 22:07:17.173277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.962 [2024-12-16 22:07:17.173287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.962 [2024-12-16 22:07:17.173299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.962 [2024-12-16 22:07:17.173305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.962 [2024-12-16 22:07:17.173314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.962 [2024-12-16 22:07:17.173320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.962 [2024-12-16 22:07:17.173327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.962 [2024-12-16 22:07:17.173334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.962 [2024-12-16 22:07:17.173343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.963 [2024-12-16 22:07:17.173349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.963 [2024-12-16 22:07:17.173357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.963 22:07:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.963 22:07:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.963 22:07:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.963 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.224 22:07:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.460 22:07:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.460 22:07:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.460 22:07:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.460 22:07:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.460 22:07:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.460 22:07:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:23.460 22:07:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.460 [2024-12-16 22:07:29.672371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
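The bdev_bdfs helper is visible almost verbatim in the trace (sw_hotplug.sh lines 12-13): it lists every bdev over RPC (rpc_cmd is the harness wrapper around rpc.py) and reduces the output to the unique PCI addresses backing NVMe namespaces; the loop at lines 50-51 then polls it until the removed controllers drop out. Reassembled from the trace, with the loop structure approximated:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # sw_hotplug.sh@50-51: wait until the removed devices stop showing up.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done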
00:11:23.460 [2024-12-16 22:07:29.673427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.460 [2024-12-16 22:07:29.673460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.460 [2024-12-16 22:07:29.673472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.460 [2024-12-16 22:07:29.673484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.460 [2024-12-16 22:07:29.673492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.460 [2024-12-16 22:07:29.673499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.460 [2024-12-16 22:07:29.673507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.460 [2024-12-16 22:07:29.673513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.460 [2024-12-16 22:07:29.673521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.460 [2024-12-16 22:07:29.673527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.460 [2024-12-16 22:07:29.673534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.460 [2024-12-16 22:07:29.673541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.046 22:07:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.046 22:07:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.046 22:07:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.046 [2024-12-16 22:07:30.172377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
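The NOTICE pairs above are SPDK printing each admin command it aborts and the completion it fabricates for it. For reference, the fields decode as standard NVMe status:

    # ASYNC EVENT REQUEST (0c) qid:0 cid:190      <- the aborted command:
    #   opcode 0x0c (Asynchronous Event Request) on the admin queue (qid:0),
    #   command identifier 190; four AERs (cid 187-190 here) were in flight.
    # ABORTED - BY REQUEST (00/07) ... dnr:0      <- its completion:
    #   status code type 0x00 (generic) / status code 0x07 (Command Abort
    #   Requested); dnr:0 means the "do not retry" bit is clear.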
00:11:24.046 [2024-12-16 22:07:30.173404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.046 [2024-12-16 22:07:30.173525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.046 [2024-12-16 22:07:30.173540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.046 [2024-12-16 22:07:30.173552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.046 [2024-12-16 22:07:30.173559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.046 [2024-12-16 22:07:30.173568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.046 [2024-12-16 22:07:30.173574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.046 [2024-12-16 22:07:30.173581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.046 [2024-12-16 22:07:30.173588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.046 [2024-12-16 22:07:30.173596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.046 [2024-12-16 22:07:30.173602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.046 [2024-12-16 22:07:30.173610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:24.046 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.623 22:07:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.623 22:07:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.623 22:07:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.623 22:07:30 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.623 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.884 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.884 22:07:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:37.170 22:07:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:37.170 22:07:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:37.170 22:07:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:37.170 22:07:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.170 22:07:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.170 22:07:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.170 22:07:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.170 22:07:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.170 22:07:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.170 22:07:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.170 22:07:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.170 [2024-12-16 22:07:43.072571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
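After each re-attach the trace runs lines 66-71: sleep out the settle time, confirm the hotplug monitor picked the controllers back up, and match the bdev-reported BDFs against the expected pair. Approximately (the escaped-glob [[ ... == \0\0\0\0... ]] in the trace is just a literal comparison):

    sleep $((hotplug_wait * 2))    # @66: sleep 12 when hotplug_wait=6
    bdfs=($(bdev_bdfs))            # @70: re-read the attached set
    # @71: both controllers must be back, in sorted order.
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]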
00:11:37.170 [2024-12-16 22:07:43.073614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.170 [2024-12-16 22:07:43.073645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.170 [2024-12-16 22:07:43.073659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.170 [2024-12-16 22:07:43.073671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.170 [2024-12-16 22:07:43.073679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.170 [2024-12-16 22:07:43.073686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.170 [2024-12-16 22:07:43.073694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.170 [2024-12-16 22:07:43.073700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.170 [2024-12-16 22:07:43.073708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.170 [2024-12-16 22:07:43.073714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.170 [2024-12-16 22:07:43.073722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.170 [2024-12-16 22:07:43.073728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.170 22:07:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:37.170 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:37.430 [2024-12-16 22:07:43.572575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:37.430 [2024-12-16 22:07:43.573708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.430 [2024-12-16 22:07:43.573740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.430 [2024-12-16 22:07:43.573750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.430 [2024-12-16 22:07:43.573761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.430 [2024-12-16 22:07:43.573768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.431 [2024-12-16 22:07:43.573778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.431 [2024-12-16 22:07:43.573784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.431 [2024-12-16 22:07:43.573792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.431 [2024-12-16 22:07:43.573801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.431 [2024-12-16 22:07:43.573808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.431 [2024-12-16 22:07:43.573814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.431 [2024-12-16 22:07:43.573822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.431 22:07:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.431 22:07:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.431 22:07:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.431 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:37.690 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.690 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.690 22:07:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:11:49.911 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.911 22:07:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.911 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:49.912 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:49.912 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:49.912 22:07:55 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:49.912 22:07:55 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:49.912 22:07:55 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:49.912 22:07:55 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:49.912 22:07:55 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:49.912 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:49.912 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:49.912 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:49.912 22:07:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:49.912 22:07:55 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.495 22:08:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.495 22:08:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.495 22:08:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.495 22:08:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:56.495 [2024-12-16 22:08:02.015687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:56.495 [2024-12-16 22:08:02.016654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.016687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.016700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 [2024-12-16 22:08:02.016712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.016723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.016730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 [2024-12-16 22:08:02.016738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.016745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.016755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 [2024-12-16 22:08:02.016761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.016768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.016775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 [2024-12-16 22:08:02.415688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
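The "took 45.21s" and "took 44.66s" summaries come from timing_cmd in autotest_common.sh, whose locals (cmd_es, TIMEFORMAT=%2R) appear in the trace. A simplified sketch of the mechanism, using bash's time keyword with a two-decimal real-time format (the real helper also preserves the timed command's own output, which is dropped here):

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R   # bash time keyword: real time only, 2 decimals
        # The time report goes to the group's stderr, which 2>&1 lets us capture.
        time=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
        echo "$time"                  # e.g. 45.21
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2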
00:11:56.495 [2024-12-16 22:08:02.417903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.417939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.417949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 [2024-12-16 22:08:02.417959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.417966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.417974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 [2024-12-16 22:08:02.417980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.417988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.417994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 [2024-12-16 22:08:02.418001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.495 [2024-12-16 22:08:02.418007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.495 [2024-12-16 22:08:02.418017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.495 22:08:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.495 22:08:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.495 22:08:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.495 22:08:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.728 22:08:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.728 22:08:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.728 22:08:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.728 22:08:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.728 22:08:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.728 22:08:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:08.728 22:08:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:08.728 [2024-12-16 22:08:14.915891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:08.728 [2024-12-16 22:08:14.916964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.728 [2024-12-16 22:08:14.916992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.728 [2024-12-16 22:08:14.917004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.728 [2024-12-16 22:08:14.917016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.728 [2024-12-16 22:08:14.917024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.728 [2024-12-16 22:08:14.917032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.728 [2024-12-16 22:08:14.917040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.728 [2024-12-16 22:08:14.917046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.728 [2024-12-16 22:08:14.917054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.728 [2024-12-16 22:08:14.917060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.728 [2024-12-16 22:08:14.917067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.728 [2024-12-16 22:08:14.917074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.988 [2024-12-16 22:08:15.315895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:08.988 [2024-12-16 22:08:15.316887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.988 [2024-12-16 22:08:15.316918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.988 [2024-12-16 22:08:15.316927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.988 [2024-12-16 22:08:15.316938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.988 [2024-12-16 22:08:15.316945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.988 [2024-12-16 22:08:15.316953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.988 [2024-12-16 22:08:15.316960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.988 [2024-12-16 22:08:15.316968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.988 [2024-12-16 22:08:15.316974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.988 [2024-12-16 22:08:15.316982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.988 [2024-12-16 22:08:15.316988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.988 [2024-12-16 22:08:15.316996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.249 22:08:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.249 22:08:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.249 22:08:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.249 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:09.510 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:09.510 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.510 22:08:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.749 22:08:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.749 22:08:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.749 22:08:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.749 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.749 [2024-12-16 22:08:27.716096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:21.749 [2024-12-16 22:08:27.717091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.749 [2024-12-16 22:08:27.717190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.749 [2024-12-16 22:08:27.717256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.749 [2024-12-16 22:08:27.717353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.749 [2024-12-16 22:08:27.717377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.750 [2024-12-16 22:08:27.717437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.750 [2024-12-16 22:08:27.717464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.750 [2024-12-16 22:08:27.717499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.750 [2024-12-16 22:08:27.717559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.750 [2024-12-16 22:08:27.717618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.750 [2024-12-16 22:08:27.717637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.750 [2024-12-16 22:08:27.717685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.750 22:08:27 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.750 22:08:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.750 22:08:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.750 22:08:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:21.750 22:08:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:22.011 [2024-12-16 22:08:28.116098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:22.011 [2024-12-16 22:08:28.117131] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.011 [2024-12-16 22:08:28.117226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.011 [2024-12-16 22:08:28.117286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.011 [2024-12-16 22:08:28.117315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.011 [2024-12-16 22:08:28.117332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.011 [2024-12-16 22:08:28.117383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.011 [2024-12-16 22:08:28.117407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.011 [2024-12-16 22:08:28.117427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.011 [2024-12-16 22:08:28.117471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.011 [2024-12-16 22:08:28.117499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.011 [2024-12-16 22:08:28.117542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.011 [2024-12-16 22:08:28.117570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:22.011 22:08:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.011 22:08:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.011 22:08:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:22.011 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.272 22:08:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.66 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.66 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.66 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.66 2 00:12:34.509 remove_attach_helper took 44.66s to complete (handling 2 nvme drive(s)) 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:34.509 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80782 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80782 ']' 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80782 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80782 00:12:34.509 killing process with pid 80782 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80782' 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80782 00:12:34.509 22:08:40 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80782 00:12:34.770 22:08:40 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:35.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:35.346 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.346 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.607 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.607 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.607 00:12:35.607 real 2m28.528s 00:12:35.607 user 1m48.904s 00:12:35.607 sys 0m18.120s 00:12:35.607 22:08:41 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.607 ************************************ 00:12:35.607 22:08:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.607 END TEST sw_hotplug 00:12:35.607 ************************************ 00:12:35.607 22:08:41 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:35.607 22:08:41 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:35.607 22:08:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:35.607 22:08:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.607 22:08:41 -- common/autotest_common.sh@10 -- # set +x 00:12:35.607 ************************************ 00:12:35.607 START TEST nvme_xnvme 00:12:35.607 ************************************ 00:12:35.607 22:08:41 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:35.607 * Looking for test storage... 
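The hotplug pass above repeats one probe throughout: rpc_cmd bdev_get_bdevs piped through a jq filter for each NVMe bdev's PCI address, then sort -u (sw_hotplug.sh@12-13 in the trace). A minimal bash sketch of that helper and of the wait loop at sw_hotplug.sh@50-51, reconstructed from the xtrace — the real script feeds jq via process substitution (the /dev/fd/63 in the log), and rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py:

    # List the PCI address of every NVMe controller that still backs a bdev.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # After unbinding a controller, poll until its bdev is torn down.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done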
00:12:35.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:35.870 22:08:41 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.870 22:08:41 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.870 22:08:41 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.870 22:08:42 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.870 22:08:42 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.870 22:08:42 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.870 22:08:42 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.870 22:08:42 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.870 22:08:42 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.870 22:08:42 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.871 22:08:42 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.871 --rc genhtml_branch_coverage=1 00:12:35.871 --rc genhtml_function_coverage=1 00:12:35.871 --rc genhtml_legend=1 00:12:35.871 --rc geninfo_all_blocks=1 00:12:35.871 --rc geninfo_unexecuted_blocks=1 00:12:35.871 00:12:35.871 ' 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.871 --rc genhtml_branch_coverage=1 00:12:35.871 --rc genhtml_function_coverage=1 00:12:35.871 --rc genhtml_legend=1 00:12:35.871 --rc geninfo_all_blocks=1 00:12:35.871 --rc geninfo_unexecuted_blocks=1 00:12:35.871 00:12:35.871 ' 00:12:35.871 22:08:42 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.871 --rc genhtml_branch_coverage=1 00:12:35.871 --rc genhtml_function_coverage=1 00:12:35.871 --rc genhtml_legend=1 00:12:35.871 --rc geninfo_all_blocks=1 00:12:35.871 --rc geninfo_unexecuted_blocks=1 00:12:35.871 00:12:35.871 ' 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.871 --rc genhtml_branch_coverage=1 00:12:35.871 --rc genhtml_function_coverage=1 00:12:35.871 --rc genhtml_legend=1 00:12:35.871 --rc geninfo_all_blocks=1 00:12:35.871 --rc geninfo_unexecuted_blocks=1 00:12:35.871 00:12:35.871 ' 00:12:35.871 22:08:42 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:35.871 22:08:42 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:35.871 22:08:42 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:35.871 22:08:42 nvme_xnvme -- 
common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:35.871 22:08:42 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:35.872 22:08:42 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:35.872 22:08:42 nvme_xnvme -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:35.872 #define SPDK_CONFIG_H 00:12:35.872 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:35.872 #define SPDK_CONFIG_APPS 1 00:12:35.872 #define SPDK_CONFIG_ARCH native 00:12:35.872 #define SPDK_CONFIG_ASAN 1 00:12:35.872 #undef SPDK_CONFIG_AVAHI 00:12:35.872 #undef SPDK_CONFIG_CET 00:12:35.872 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:35.872 #define SPDK_CONFIG_COVERAGE 1 00:12:35.872 #define SPDK_CONFIG_CROSS_PREFIX 00:12:35.872 #undef SPDK_CONFIG_CRYPTO 00:12:35.872 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:35.872 #undef SPDK_CONFIG_CUSTOMOCF 00:12:35.872 #undef SPDK_CONFIG_DAOS 00:12:35.872 #define SPDK_CONFIG_DAOS_DIR 00:12:35.872 #define SPDK_CONFIG_DEBUG 1 00:12:35.872 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:35.872 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:35.872 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:35.872 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:35.872 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:35.872 #undef SPDK_CONFIG_DPDK_UADK 00:12:35.872 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:35.872 #define SPDK_CONFIG_EXAMPLES 1 00:12:35.872 #undef SPDK_CONFIG_FC 00:12:35.872 #define SPDK_CONFIG_FC_PATH 00:12:35.872 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:35.872 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:35.872 #define SPDK_CONFIG_FSDEV 1 00:12:35.872 #undef SPDK_CONFIG_FUSE 00:12:35.872 #undef SPDK_CONFIG_FUZZER 00:12:35.872 #define SPDK_CONFIG_FUZZER_LIB 00:12:35.872 #undef SPDK_CONFIG_GOLANG 00:12:35.872 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:35.872 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:35.872 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:35.872 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:35.872 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:35.872 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:35.872 #undef SPDK_CONFIG_HAVE_LZ4 00:12:35.872 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:35.872 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:35.872 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:35.872 #define SPDK_CONFIG_IDXD 1 00:12:35.872 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:35.872 #undef SPDK_CONFIG_IPSEC_MB 00:12:35.872 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:35.872 #define SPDK_CONFIG_ISAL 1 00:12:35.872 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:35.872 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:35.872 #define SPDK_CONFIG_LIBDIR 00:12:35.872 #undef SPDK_CONFIG_LTO 00:12:35.872 #define SPDK_CONFIG_MAX_LCORES 128 00:12:35.872 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:35.872 #define SPDK_CONFIG_NVME_CUSE 1 00:12:35.872 #undef 
SPDK_CONFIG_OCF 00:12:35.872 #define SPDK_CONFIG_OCF_PATH 00:12:35.872 #define SPDK_CONFIG_OPENSSL_PATH 00:12:35.872 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:35.872 #define SPDK_CONFIG_PGO_DIR 00:12:35.872 #undef SPDK_CONFIG_PGO_USE 00:12:35.872 #define SPDK_CONFIG_PREFIX /usr/local 00:12:35.872 #undef SPDK_CONFIG_RAID5F 00:12:35.872 #undef SPDK_CONFIG_RBD 00:12:35.872 #define SPDK_CONFIG_RDMA 1 00:12:35.872 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:35.872 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:35.872 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:35.872 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:35.872 #define SPDK_CONFIG_SHARED 1 00:12:35.872 #undef SPDK_CONFIG_SMA 00:12:35.872 #define SPDK_CONFIG_TESTS 1 00:12:35.872 #undef SPDK_CONFIG_TSAN 00:12:35.872 #define SPDK_CONFIG_UBLK 1 00:12:35.872 #define SPDK_CONFIG_UBSAN 1 00:12:35.872 #undef SPDK_CONFIG_UNIT_TESTS 00:12:35.872 #undef SPDK_CONFIG_URING 00:12:35.872 #define SPDK_CONFIG_URING_PATH 00:12:35.872 #undef SPDK_CONFIG_URING_ZNS 00:12:35.872 #undef SPDK_CONFIG_USDT 00:12:35.872 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:35.872 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:35.872 #undef SPDK_CONFIG_VFIO_USER 00:12:35.872 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:35.872 #define SPDK_CONFIG_VHOST 1 00:12:35.872 #define SPDK_CONFIG_VIRTIO 1 00:12:35.872 #undef SPDK_CONFIG_VTUNE 00:12:35.872 #define SPDK_CONFIG_VTUNE_DIR 00:12:35.872 #define SPDK_CONFIG_WERROR 1 00:12:35.872 #define SPDK_CONFIG_WPDK_DIR 00:12:35.872 #define SPDK_CONFIG_XNVME 1 00:12:35.872 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:35.872 22:08:42 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.872 22:08:42 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.872 22:08:42 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.872 22:08:42 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.872 22:08:42 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.872 22:08:42 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.872 22:08:42 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.872 22:08:42 nvme_xnvme -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.872 22:08:42 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:35.872 22:08:42 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:35.872 22:08:42 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:35.872 22:08:42 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:35.873 22:08:42 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@140 -- # : v23.11 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:35.873 22:08:42 
nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@182 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:35.873 22:08:42 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:35.874 22:08:42 nvme_xnvme -- 
common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 82120 ]] 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 82120 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.KrSRsm 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.KrSRsm/tests/xnvme /tmp/spdk.KrSRsm 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13240692736 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:35.874 22:08:42 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6343249920 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13240692736 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6343249920 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:35.874 22:08:42 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.874 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98482450432 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1220329472 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:35.875 * Looking for test storage... 
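The df/read trace above is set_test_storage filling its per-mount tables before the candidate walk that follows (autotest_common.sh@381-402). A condensed sketch of that selection, reconstructed from the xtrace; the byte conversion is an assumption (df -T reports 1K blocks while the logged sizes are in bytes), and the tmpfs/ramfs resize branch visible at @393 is omitted:

    # Index `df -T` output by mount point, converting 1K blocks to bytes.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    # Keep the first candidate directory whose mount has enough free space.
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]:-0}
        ((target_space >= requested_size)) && break
    done
    export SPDK_TEST_STORAGE=$target_dir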
00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13240692736 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:35.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:35.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.875 --rc genhtml_branch_coverage=1 00:12:35.875 --rc genhtml_function_coverage=1 00:12:35.875 --rc genhtml_legend=1 00:12:35.875 --rc geninfo_all_blocks=1 00:12:35.875 --rc geninfo_unexecuted_blocks=1 00:12:35.875 00:12:35.875 ' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:35.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.875 --rc genhtml_branch_coverage=1 00:12:35.875 --rc genhtml_function_coverage=1 00:12:35.875 --rc genhtml_legend=1 00:12:35.875 --rc geninfo_all_blocks=1 
00:12:35.875 --rc geninfo_unexecuted_blocks=1 00:12:35.875 00:12:35.875 ' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:35.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.875 --rc genhtml_branch_coverage=1 00:12:35.875 --rc genhtml_function_coverage=1 00:12:35.875 --rc genhtml_legend=1 00:12:35.875 --rc geninfo_all_blocks=1 00:12:35.875 --rc geninfo_unexecuted_blocks=1 00:12:35.875 00:12:35.875 ' 00:12:35.875 22:08:42 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:35.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.875 --rc genhtml_branch_coverage=1 00:12:35.875 --rc genhtml_function_coverage=1 00:12:35.875 --rc genhtml_legend=1 00:12:35.875 --rc geninfo_all_blocks=1 00:12:35.875 --rc geninfo_unexecuted_blocks=1 00:12:35.875 00:12:35.875 ' 00:12:35.875 22:08:42 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.875 22:08:42 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.875 22:08:42 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.875 22:08:42 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.136 22:08:42 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.136 22:08:42 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:36.136 22:08:42 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.136 22:08:42 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:36.136 22:08:42 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:36.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:36.398 Waiting for block devices as requested 00:12:36.398 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:36.659 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:36.659 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:36.659 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:41.948 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:41.948 22:08:48 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:42.209 22:08:48 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:42.209 22:08:48 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:42.470 22:08:48 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:42.470 22:08:48 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:42.470 No valid GPT data, bailing 00:12:42.470 22:08:48 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:42.470 22:08:48 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:42.470 22:08:48 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:42.470 22:08:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:42.470 22:08:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:42.470 22:08:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.470 22:08:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.732 ************************************ 00:12:42.732 START TEST xnvme_rpc 00:12:42.732 ************************************ 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82515 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82515 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82515 ']' 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:42.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.732 22:08:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.732 [2024-12-16 22:08:48.915736] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
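Stripped of bookkeeping, the xnvme_rpc test starting here is a create/inspect/delete round-trip against a freshly started target. A hedged sketch of the same lifecycle, assuming the standalone scripts/rpc.py accepts the same positional arguments the rpc_cmd wrapper forwards in the trace:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path for the CLI wrapper behind rpc_cmd
    "$spdk_tgt" & tgt_pid=$!
    # the harness blocks on waitforlisten until /var/tmp/spdk.sock is accepting connections
    "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # append -c to enable conserve_cpu
    "$rpc" bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"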
00:12:42.732 [2024-12-16 22:08:48.915908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82515 ] 00:12:42.732 [2024-12-16 22:08:49.075917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.993 [2024-12-16 22:08:49.104996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.565 xnvme_bdev 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:43.565 22:08:49 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82515 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82515 ']' 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82515 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.565 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82515 00:12:43.826 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.826 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.826 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82515' 00:12:43.826 killing process with pid 82515 00:12:43.826 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82515 00:12:43.826 22:08:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82515 00:12:44.087 00:12:44.088 real 0m1.370s 00:12:44.088 user 0m1.406s 00:12:44.088 sys 0m0.405s 00:12:44.088 22:08:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.088 ************************************ 00:12:44.088 END TEST xnvme_rpc 00:12:44.088 ************************************ 00:12:44.088 22:08:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.088 22:08:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:44.088 22:08:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.088 22:08:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.088 22:08:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.088 ************************************ 00:12:44.088 START TEST xnvme_bdevperf 00:12:44.088 ************************************ 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:44.088 22:08:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.088 { 00:12:44.088 "subsystems": [ 00:12:44.088 { 00:12:44.088 "subsystem": "bdev", 00:12:44.088 "config": [ 00:12:44.088 { 00:12:44.088 "params": { 00:12:44.088 "io_mechanism": "libaio", 00:12:44.088 "conserve_cpu": false, 00:12:44.088 "filename": "/dev/nvme0n1", 00:12:44.088 "name": "xnvme_bdev" 00:12:44.088 }, 00:12:44.088 "method": "bdev_xnvme_create" 00:12:44.088 }, 00:12:44.088 { 00:12:44.088 "method": "bdev_wait_for_examine" 00:12:44.088 } 00:12:44.088 ] 00:12:44.088 } 00:12:44.088 ] 00:12:44.088 } 00:12:44.088 [2024-12-16 22:08:50.333063] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:44.088 [2024-12-16 22:08:50.333193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82573 ] 00:12:44.348 [2024-12-16 22:08:50.492548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.348 [2024-12-16 22:08:50.521655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.348 Running I/O for 5 seconds... 00:12:46.303 24889.00 IOPS, 97.22 MiB/s [2024-12-16T22:08:54.037Z] 25567.50 IOPS, 99.87 MiB/s [2024-12-16T22:08:54.980Z] 25858.33 IOPS, 101.01 MiB/s [2024-12-16T22:08:55.925Z] 25982.00 IOPS, 101.49 MiB/s [2024-12-16T22:08:55.925Z] 26067.20 IOPS, 101.83 MiB/s 00:12:49.578 Latency(us) 00:12:49.578 [2024-12-16T22:08:55.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.578 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:49.578 xnvme_bdev : 5.01 26021.79 101.65 0.00 0.00 2454.41 497.82 8368.44 00:12:49.578 [2024-12-16T22:08:55.925Z] =================================================================================================================== 00:12:49.578 [2024-12-16T22:08:55.925Z] Total : 26021.79 101.65 0.00 0.00 2454.41 497.82 8368.44 00:12:49.578 22:08:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:49.578 22:08:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:49.578 22:08:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:49.578 22:08:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:49.578 22:08:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:49.578 { 00:12:49.578 "subsystems": [ 00:12:49.578 { 00:12:49.578 "subsystem": "bdev", 00:12:49.578 "config": [ 00:12:49.578 { 00:12:49.578 "params": { 00:12:49.578 "io_mechanism": "libaio", 00:12:49.578 "conserve_cpu": false, 00:12:49.578 "filename": "/dev/nvme0n1", 00:12:49.578 "name": "xnvme_bdev" 00:12:49.578 }, 00:12:49.578 "method": "bdev_xnvme_create" 00:12:49.578 }, 00:12:49.578 { 00:12:49.578 "method": "bdev_wait_for_examine" 00:12:49.578 } 00:12:49.578 ] 00:12:49.578 } 00:12:49.578 ] 00:12:49.578 } 00:12:49.578 [2024-12-16 22:08:55.921593] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
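The bdevperf runs below take their bdev layout as JSON on fd 62 (the gen_conf output printed above). An equivalent invocation with the config parked in a scratch file; /tmp/xnvme_bdev.json is illustrative, not what the harness uses:

    echo '{"subsystems":[{"subsystem":"bdev","config":[{"params":{"io_mechanism":"libaio","conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"},"method":"bdev_xnvme_create"},{"method":"bdev_wait_for_examine"}]}]}' > /tmp/xnvme_bdev.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096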
00:12:49.578 [2024-12-16 22:08:55.921734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82637 ] 00:12:49.840 [2024-12-16 22:08:56.081314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.840 [2024-12-16 22:08:56.110042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.101 Running I/O for 5 seconds... 00:12:51.988 31910.00 IOPS, 124.65 MiB/s [2024-12-16T22:08:59.278Z] 31552.50 IOPS, 123.25 MiB/s [2024-12-16T22:09:00.663Z] 32021.00 IOPS, 125.08 MiB/s [2024-12-16T22:09:01.236Z] 32068.25 IOPS, 125.27 MiB/s [2024-12-16T22:09:01.236Z] 32290.80 IOPS, 126.14 MiB/s 00:12:54.889 Latency(us) 00:12:54.889 [2024-12-16T22:09:01.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.889 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:54.889 xnvme_bdev : 5.01 32245.20 125.96 0.00 0.00 1979.86 460.01 7158.55 00:12:54.889 [2024-12-16T22:09:01.236Z] =================================================================================================================== 00:12:54.889 [2024-12-16T22:09:01.236Z] Total : 32245.20 125.96 0.00 0.00 1979.86 460.01 7158.55 00:12:55.149 00:12:55.149 real 0m11.160s 00:12:55.149 user 0m3.450s 00:12:55.149 sys 0m6.358s 00:12:55.149 22:09:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.149 ************************************ 00:12:55.149 END TEST xnvme_bdevperf 00:12:55.149 ************************************ 00:12:55.149 22:09:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.149 22:09:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:55.149 22:09:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:55.149 22:09:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.149 22:09:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.149 ************************************ 00:12:55.149 START TEST xnvme_fio_plugin 00:12:55.149 ************************************ 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:55.150 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:55.411 22:09:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:55.411 { 00:12:55.411 "subsystems": [ 00:12:55.411 { 00:12:55.411 "subsystem": "bdev", 00:12:55.411 "config": [ 00:12:55.411 { 00:12:55.411 "params": { 00:12:55.411 "io_mechanism": "libaio", 00:12:55.411 "conserve_cpu": false, 00:12:55.411 "filename": "/dev/nvme0n1", 00:12:55.411 "name": "xnvme_bdev" 00:12:55.411 }, 00:12:55.411 "method": "bdev_xnvme_create" 00:12:55.411 }, 00:12:55.411 { 00:12:55.411 "method": "bdev_wait_for_examine" 00:12:55.411 } 00:12:55.411 ] 00:12:55.411 } 00:12:55.411 ] 00:12:55.411 } 00:12:55.411 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:55.411 fio-3.35 00:12:55.411 Starting 1 thread 00:13:02.071 00:13:02.071 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82745: Mon Dec 16 22:09:07 2024 00:13:02.071 read: IOPS=33.4k, BW=130MiB/s (137MB/s)(652MiB/5002msec) 00:13:02.071 slat (usec): min=4, max=1998, avg=18.21, stdev=89.05 00:13:02.071 clat (usec): min=107, max=4893, avg=1405.03, stdev=494.48 00:13:02.071 lat (usec): min=201, max=5251, avg=1423.25, stdev=484.95 00:13:02.071 clat percentiles (usec): 00:13:02.071 | 1.00th=[ 302], 5.00th=[ 619], 10.00th=[ 783], 20.00th=[ 996], 00:13:02.071 | 30.00th=[ 1156], 40.00th=[ 1287], 50.00th=[ 1418], 60.00th=[ 1532], 00:13:02.071 | 70.00th=[ 1647], 80.00th=[ 1762], 90.00th=[ 1958], 95.00th=[ 2180], 00:13:02.071 | 99.00th=[ 2868], 99.50th=[ 3130], 99.90th=[ 3818], 99.95th=[ 4080], 00:13:02.071 | 99.99th=[ 4686] 00:13:02.071 bw ( KiB/s): 
min=127136, max=142712, per=100.00%, avg=134395.56, stdev=5057.04, samples=9 00:13:02.071 iops : min=31784, max=35678, avg=33598.89, stdev=1264.26, samples=9 00:13:02.071 lat (usec) : 250=0.49%, 500=2.51%, 750=5.59%, 1000=11.55% 00:13:02.071 lat (msec) : 2=71.08%, 4=8.73%, 10=0.06% 00:13:02.071 cpu : usr=51.95%, sys=40.85%, ctx=13, majf=0, minf=773 00:13:02.071 IO depths : 1=0.7%, 2=1.6%, 4=3.8%, 8=8.9%, 16=22.4%, 32=60.5%, >=64=2.1% 00:13:02.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.071 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:02.071 issued rwts: total=166981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.071 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:02.071 00:13:02.071 Run status group 0 (all jobs): 00:13:02.071 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=652MiB (684MB), run=5002-5002msec 00:13:02.071 ----------------------------------------------------- 00:13:02.071 Suppressions used: 00:13:02.071 count bytes template 00:13:02.071 1 11 /usr/src/fio/parse.c 00:13:02.071 1 8 libtcmalloc_minimal.so 00:13:02.071 1 904 libcrypto.so 00:13:02.071 ----------------------------------------------------- 00:13:02.071 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:02.071 
22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:02.071 22:09:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.071 { 00:13:02.071 "subsystems": [ 00:13:02.071 { 00:13:02.071 "subsystem": "bdev", 00:13:02.071 "config": [ 00:13:02.071 { 00:13:02.071 "params": { 00:13:02.071 "io_mechanism": "libaio", 00:13:02.071 "conserve_cpu": false, 00:13:02.071 "filename": "/dev/nvme0n1", 00:13:02.071 "name": "xnvme_bdev" 00:13:02.071 }, 00:13:02.071 "method": "bdev_xnvme_create" 00:13:02.071 }, 00:13:02.071 { 00:13:02.071 "method": "bdev_wait_for_examine" 00:13:02.071 } 00:13:02.071 ] 00:13:02.071 } 00:13:02.071 ] 00:13:02.071 } 00:13:02.071 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:02.071 fio-3.35 00:13:02.071 Starting 1 thread 00:13:07.376 00:13:07.376 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82832: Mon Dec 16 22:09:13 2024 00:13:07.376 write: IOPS=36.4k, BW=142MiB/s (149MB/s)(710MiB/5001msec); 0 zone resets 00:13:07.376 slat (usec): min=4, max=2232, avg=17.91, stdev=82.05 00:13:07.376 clat (usec): min=106, max=7192, avg=1266.30, stdev=497.67 00:13:07.376 lat (usec): min=199, max=7197, avg=1284.21, stdev=490.63 00:13:07.376 clat percentiles (usec): 00:13:07.376 | 1.00th=[ 297], 5.00th=[ 537], 10.00th=[ 685], 20.00th=[ 865], 00:13:07.376 | 30.00th=[ 996], 40.00th=[ 1123], 50.00th=[ 1221], 60.00th=[ 1352], 00:13:07.376 | 70.00th=[ 1483], 80.00th=[ 1631], 90.00th=[ 1860], 95.00th=[ 2089], 00:13:07.376 | 99.00th=[ 2769], 99.50th=[ 3163], 99.90th=[ 3818], 99.95th=[ 4146], 00:13:07.376 | 99.99th=[ 5014] 00:13:07.376 bw ( KiB/s): min=142352, max=149632, per=99.92%, avg=145296.89, stdev=2995.25, samples=9 00:13:07.376 iops : min=35588, max=37408, avg=36324.22, stdev=748.81, samples=9 00:13:07.376 lat (usec) : 250=0.54%, 500=3.60%, 750=9.14%, 1000=17.21% 00:13:07.376 lat (msec) : 2=62.91%, 4=6.53%, 10=0.07% 00:13:07.376 cpu : usr=48.08%, sys=42.58%, ctx=31, majf=0, minf=774 00:13:07.376 IO depths : 1=0.6%, 2=1.4%, 4=3.3%, 8=8.3%, 16=22.2%, 32=62.1%, >=64=2.2% 00:13:07.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.376 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:07.376 issued rwts: total=0,181806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.376 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:07.376 00:13:07.376 Run status group 0 (all jobs): 00:13:07.376 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=710MiB (745MB), run=5001-5001msec 00:13:07.376 ----------------------------------------------------- 00:13:07.376 Suppressions used: 00:13:07.376 count bytes template 00:13:07.376 1 11 /usr/src/fio/parse.c 00:13:07.376 1 8 libtcmalloc_minimal.so 00:13:07.376 1 904 libcrypto.so 00:13:07.376 ----------------------------------------------------- 00:13:07.376 00:13:07.376 00:13:07.376 real 0m12.082s 00:13:07.376 user 0m6.112s 00:13:07.376 sys 0m4.752s 00:13:07.376 
22:09:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.376 ************************************ 00:13:07.376 END TEST xnvme_fio_plugin 00:13:07.376 22:09:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:07.376 ************************************ 00:13:07.376 22:09:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:07.376 22:09:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:07.376 22:09:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:07.376 22:09:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:07.376 22:09:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.376 22:09:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.376 22:09:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.376 ************************************ 00:13:07.376 START TEST xnvme_rpc 00:13:07.376 ************************************ 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82907 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82907 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82907 ']' 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:07.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.376 22:09:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.638 [2024-12-16 22:09:13.731983] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
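The parameter checks in this conserve_cpu pass lean on the rpc_xnvme helper whose two halves are visible in the trace: a framework_get_config call plus a jq select over its output. Reconstructed from those traced lines (the in-tree version lives in the sourced xnvme/common.sh):

    rpc_xnvme() {   # fetch one param of the bdev_xnvme_create config entry
        local prop=$1
        rpc_cmd framework_get_config bdev |
            jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$prop"
    }
    rpc_xnvme conserve_cpu   # prints "true" in this pass, "false" in the first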
00:13:07.638 [2024-12-16 22:09:13.732140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82907 ] 00:13:07.638 [2024-12-16 22:09:13.893545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.638 [2024-12-16 22:09:13.923244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.581 xnvme_bdev 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82907 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82907 ']' 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82907 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82907 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.581 killing process with pid 82907 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82907' 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82907 00:13:08.581 22:09:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82907 00:13:08.842 00:13:08.842 real 0m1.423s 00:13:08.842 user 0m1.475s 00:13:08.842 sys 0m0.400s 00:13:08.842 22:09:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.842 ************************************ 00:13:08.842 END TEST xnvme_rpc 00:13:08.842 22:09:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.842 ************************************ 00:13:08.842 22:09:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:08.842 22:09:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:08.842 22:09:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.842 22:09:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.842 ************************************ 00:13:08.842 START TEST xnvme_bdevperf 00:13:08.842 ************************************ 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:08.842 22:09:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:08.842 { 00:13:08.842 "subsystems": [ 00:13:08.842 { 00:13:08.842 "subsystem": "bdev", 00:13:08.842 "config": [ 00:13:08.842 { 00:13:08.842 "params": { 00:13:08.842 "io_mechanism": "libaio", 00:13:08.842 "conserve_cpu": true, 00:13:08.842 "filename": "/dev/nvme0n1", 00:13:08.842 "name": "xnvme_bdev" 00:13:08.842 }, 00:13:08.842 "method": "bdev_xnvme_create" 00:13:08.842 }, 00:13:08.842 { 00:13:08.842 "method": "bdev_wait_for_examine" 00:13:08.842 } 00:13:08.842 ] 00:13:08.842 } 00:13:08.842 ] 00:13:08.842 } 00:13:09.103 [2024-12-16 22:09:15.209817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:13:09.103 [2024-12-16 22:09:15.209975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82965 ] 00:13:09.103 [2024-12-16 22:09:15.372069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.103 [2024-12-16 22:09:15.400691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.364 Running I/O for 5 seconds... 00:13:11.248 31633.00 IOPS, 123.57 MiB/s [2024-12-16T22:09:18.537Z] 30699.50 IOPS, 119.92 MiB/s [2024-12-16T22:09:19.924Z] 30506.33 IOPS, 119.17 MiB/s [2024-12-16T22:09:20.867Z] 30626.75 IOPS, 119.64 MiB/s 00:13:14.520 Latency(us) 00:13:14.520 [2024-12-16T22:09:20.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.520 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:14.520 xnvme_bdev : 5.00 30715.90 119.98 0.00 0.00 2078.80 466.31 11645.24 00:13:14.520 [2024-12-16T22:09:20.867Z] =================================================================================================================== 00:13:14.520 [2024-12-16T22:09:20.867Z] Total : 30715.90 119.98 0.00 0.00 2078.80 466.31 11645.24 00:13:14.520 22:09:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.520 22:09:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:14.520 22:09:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:14.520 22:09:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:14.520 22:09:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:14.520 { 00:13:14.520 "subsystems": [ 00:13:14.520 { 00:13:14.520 "subsystem": "bdev", 00:13:14.520 "config": [ 00:13:14.520 { 00:13:14.520 "params": { 00:13:14.520 "io_mechanism": "libaio", 00:13:14.520 "conserve_cpu": true, 00:13:14.520 "filename": "/dev/nvme0n1", 00:13:14.520 "name": "xnvme_bdev" 00:13:14.520 }, 00:13:14.520 "method": "bdev_xnvme_create" 00:13:14.520 }, 00:13:14.520 { 00:13:14.520 "method": "bdev_wait_for_examine" 00:13:14.520 } 00:13:14.520 ] 00:13:14.520 } 00:13:14.520 ] 00:13:14.520 } 00:13:14.520 [2024-12-16 22:09:20.792876] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
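Both xnvme_fio_plugin passes (the earlier conserve_cpu=false one and the one coming up) reduce to preloading the SPDK bdev ioengine, with libasan listed first since this build is ASan-instrumented, then pointing fio at the bdev by name. A condensed form of the traced invocation, reusing the illustrative /tmp/xnvme_bdev.json from the bdevperf sketch in place of the harness's /dev/fd/62:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev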
00:13:14.520 [2024-12-16 22:09:20.793026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83029 ] 00:13:14.781 [2024-12-16 22:09:20.954395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.781 [2024-12-16 22:09:20.982890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.781 Running I/O for 5 seconds... 00:13:17.111 34771.00 IOPS, 135.82 MiB/s [2024-12-16T22:09:24.401Z] 36108.50 IOPS, 141.05 MiB/s [2024-12-16T22:09:25.345Z] 35373.67 IOPS, 138.18 MiB/s [2024-12-16T22:09:26.287Z] 34994.00 IOPS, 136.70 MiB/s [2024-12-16T22:09:26.287Z] 34998.80 IOPS, 136.71 MiB/s 00:13:19.940 Latency(us) 00:13:19.940 [2024-12-16T22:09:26.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.940 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:19.940 xnvme_bdev : 5.01 34955.84 136.55 0.00 0.00 1826.47 385.97 7259.37 00:13:19.940 [2024-12-16T22:09:26.287Z] =================================================================================================================== 00:13:19.940 [2024-12-16T22:09:26.287Z] Total : 34955.84 136.55 0.00 0.00 1826.47 385.97 7259.37 00:13:20.201 00:13:20.201 real 0m11.190s 00:13:20.201 user 0m3.397s 00:13:20.201 sys 0m6.289s 00:13:20.201 22:09:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.201 22:09:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:20.201 ************************************ 00:13:20.201 END TEST xnvme_bdevperf 00:13:20.201 ************************************ 00:13:20.201 22:09:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:20.201 22:09:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.201 22:09:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.201 22:09:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.201 ************************************ 00:13:20.201 START TEST xnvme_fio_plugin 00:13:20.201 ************************************ 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:20.201 
22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:20.201 22:09:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.201 { 00:13:20.201 "subsystems": [ 00:13:20.201 { 00:13:20.201 "subsystem": "bdev", 00:13:20.201 "config": [ 00:13:20.201 { 00:13:20.201 "params": { 00:13:20.201 "io_mechanism": "libaio", 00:13:20.202 "conserve_cpu": true, 00:13:20.202 "filename": "/dev/nvme0n1", 00:13:20.202 "name": "xnvme_bdev" 00:13:20.202 }, 00:13:20.202 "method": "bdev_xnvme_create" 00:13:20.202 }, 00:13:20.202 { 00:13:20.202 "method": "bdev_wait_for_examine" 00:13:20.202 } 00:13:20.202 ] 00:13:20.202 } 00:13:20.202 ] 00:13:20.202 } 00:13:20.462 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:20.463 fio-3.35 00:13:20.463 Starting 1 thread 00:13:25.758 00:13:25.758 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83137: Mon Dec 16 22:09:32 2024 00:13:25.758 read: IOPS=33.1k, BW=129MiB/s (136MB/s)(647MiB/5001msec) 00:13:25.758 slat (usec): min=4, max=2206, avg=20.75, stdev=97.08 00:13:25.758 clat (usec): min=106, max=4682, avg=1367.64, stdev=510.26 00:13:25.758 lat (usec): min=217, max=4808, avg=1388.39, stdev=500.40 00:13:25.758 clat percentiles (usec): 00:13:25.758 | 1.00th=[ 289], 5.00th=[ 586], 10.00th=[ 750], 20.00th=[ 947], 00:13:25.758 | 30.00th=[ 1090], 40.00th=[ 1221], 50.00th=[ 1352], 60.00th=[ 1483], 00:13:25.758 | 70.00th=[ 1598], 80.00th=[ 1745], 90.00th=[ 1975], 95.00th=[ 2212], 00:13:25.758 | 99.00th=[ 2835], 99.50th=[ 3163], 99.90th=[ 3884], 99.95th=[ 4047], 00:13:25.758 | 99.99th=[ 4490] 00:13:25.758 bw ( KiB/s): min=128512, 
max=138160, per=100.00%, avg=132736.89, stdev=3755.90, samples=9 00:13:25.758 iops : min=32128, max=34540, avg=33184.22, stdev=938.97, samples=9 00:13:25.758 lat (usec) : 250=0.57%, 500=2.81%, 750=6.59%, 1000=13.42% 00:13:25.758 lat (msec) : 2=67.62%, 4=8.93%, 10=0.07% 00:13:25.758 cpu : usr=44.50%, sys=47.74%, ctx=11, majf=0, minf=773 00:13:25.758 IO depths : 1=0.6%, 2=1.3%, 4=3.1%, 8=8.2%, 16=22.5%, 32=62.1%, >=64=2.1% 00:13:25.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.758 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:25.758 issued rwts: total=165621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.758 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:25.758 00:13:25.758 Run status group 0 (all jobs): 00:13:25.758 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=647MiB (678MB), run=5001-5001msec 00:13:26.331 ----------------------------------------------------- 00:13:26.331 Suppressions used: 00:13:26.331 count bytes template 00:13:26.331 1 11 /usr/src/fio/parse.c 00:13:26.331 1 8 libtcmalloc_minimal.so 00:13:26.331 1 904 libcrypto.so 00:13:26.331 ----------------------------------------------------- 00:13:26.331 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:26.331 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:26.332 22:09:32 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:26.332 22:09:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:26.332 { 00:13:26.332 "subsystems": [ 00:13:26.332 { 00:13:26.332 "subsystem": "bdev", 00:13:26.332 "config": [ 00:13:26.332 { 00:13:26.332 "params": { 00:13:26.332 "io_mechanism": "libaio", 00:13:26.332 "conserve_cpu": true, 00:13:26.332 "filename": "/dev/nvme0n1", 00:13:26.332 "name": "xnvme_bdev" 00:13:26.332 }, 00:13:26.332 "method": "bdev_xnvme_create" 00:13:26.332 }, 00:13:26.332 { 00:13:26.332 "method": "bdev_wait_for_examine" 00:13:26.332 } 00:13:26.332 ] 00:13:26.332 } 00:13:26.332 ] 00:13:26.332 } 00:13:26.332 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:26.332 fio-3.35 00:13:26.332 Starting 1 thread 00:13:32.920 00:13:32.920 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83218: Mon Dec 16 22:09:38 2024 00:13:32.920 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(664MiB/5002msec); 0 zone resets 00:13:32.920 slat (usec): min=4, max=1871, avg=20.01, stdev=91.63 00:13:32.920 clat (usec): min=107, max=6255, avg=1332.41, stdev=502.84 00:13:32.920 lat (usec): min=213, max=6260, avg=1352.42, stdev=493.92 00:13:32.920 clat percentiles (usec): 00:13:32.920 | 1.00th=[ 306], 5.00th=[ 578], 10.00th=[ 734], 20.00th=[ 914], 00:13:32.920 | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1319], 60.00th=[ 1434], 00:13:32.920 | 70.00th=[ 1549], 80.00th=[ 1713], 90.00th=[ 1926], 95.00th=[ 2147], 00:13:32.920 | 99.00th=[ 2802], 99.50th=[ 3163], 99.90th=[ 3982], 99.95th=[ 4359], 00:13:32.920 | 99.99th=[ 5538] 00:13:32.920 bw ( KiB/s): min=126656, max=143464, per=99.15%, avg=134759.11, stdev=6105.79, samples=9 00:13:32.920 iops : min=31664, max=35866, avg=33689.78, stdev=1526.45, samples=9 00:13:32.920 lat (usec) : 250=0.52%, 500=2.86%, 750=7.46%, 1000=14.75% 00:13:32.920 lat (msec) : 2=66.39%, 4=7.92%, 10=0.10% 00:13:32.920 cpu : usr=45.89%, sys=45.67%, ctx=8, majf=0, minf=774 00:13:32.920 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.0%, 16=22.3%, 32=62.4%, >=64=2.1% 00:13:32.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.920 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.4%, 64=1.6%, >=64=0.0% 00:13:32.920 issued rwts: total=0,169963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.920 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:32.920 00:13:32.920 Run status group 0 (all jobs): 00:13:32.920 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=664MiB (696MB), run=5002-5002msec 00:13:32.920 ----------------------------------------------------- 00:13:32.920 Suppressions used: 00:13:32.920 count bytes template 00:13:32.920 1 11 /usr/src/fio/parse.c 00:13:32.920 1 8 libtcmalloc_minimal.so 00:13:32.920 1 904 libcrypto.so 00:13:32.920 ----------------------------------------------------- 00:13:32.920 00:13:32.920 00:13:32.920 real 0m12.084s 00:13:32.920 user 0m5.630s 00:13:32.920 sys 0m5.257s 00:13:32.920 22:09:38 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.920 ************************************ 00:13:32.920 END TEST xnvme_fio_plugin 00:13:32.920 ************************************ 00:13:32.920 22:09:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:32.920 22:09:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:32.920 22:09:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:32.920 22:09:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.920 22:09:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.920 ************************************ 00:13:32.920 START TEST xnvme_rpc 00:13:32.920 ************************************ 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83304 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83304 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83304 ']' 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:32.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.920 22:09:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.920 [2024-12-16 22:09:38.636866] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
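For reference, the RPC sequence this test drives can be replayed by hand against a running spdk_tgt. A minimal sketch, assuming the stock scripts/rpc.py client from the same repo and the device path used in this run:

./build/bin/spdk_tgt &                                                  # start the target
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring     # register the bdev (conserve_cpu off)
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # expect: io_uring
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev                           # tear the bdev down again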
00:13:32.920 [2024-12-16 22:09:38.637052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83304 ] 00:13:32.920 [2024-12-16 22:09:38.799313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.920 [2024-12-16 22:09:38.827825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.241 xnvme_bdev 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:33.241 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.503 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83304 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83304 ']' 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83304 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83304 00:13:33.504 killing process with pid 83304 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83304' 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83304 00:13:33.504 22:09:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83304 00:13:33.764 00:13:33.764 real 0m1.459s 00:13:33.765 user 0m1.528s 00:13:33.765 sys 0m0.424s 00:13:33.765 22:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.765 ************************************ 00:13:33.765 END TEST xnvme_rpc 00:13:33.765 ************************************ 00:13:33.765 22:09:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.765 22:09:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:33.765 22:09:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.765 22:09:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.765 22:09:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.765 ************************************ 00:13:33.765 START TEST xnvme_bdevperf 00:13:33.765 ************************************ 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:33.765 22:09:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:34.026 { 00:13:34.026 "subsystems": [ 00:13:34.026 { 00:13:34.026 "subsystem": "bdev", 00:13:34.026 "config": [ 00:13:34.026 { 00:13:34.026 "params": { 00:13:34.026 "io_mechanism": "io_uring", 00:13:34.026 "conserve_cpu": false, 00:13:34.026 "filename": "/dev/nvme0n1", 00:13:34.026 "name": "xnvme_bdev" 00:13:34.026 }, 00:13:34.026 "method": "bdev_xnvme_create" 00:13:34.026 }, 00:13:34.026 { 00:13:34.026 "method": "bdev_wait_for_examine" 00:13:34.026 } 00:13:34.026 ] 00:13:34.026 } 00:13:34.026 ] 00:13:34.026 } 00:13:34.026 [2024-12-16 22:09:40.163261] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:13:34.026 [2024-12-16 22:09:40.163421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83356 ] 00:13:34.026 [2024-12-16 22:09:40.330805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.026 [2024-12-16 22:09:40.358632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.287 Running I/O for 5 seconds... 00:13:36.176 32431.00 IOPS, 126.68 MiB/s [2024-12-16T22:09:43.468Z] 31796.50 IOPS, 124.21 MiB/s [2024-12-16T22:09:44.856Z] 31943.00 IOPS, 124.78 MiB/s [2024-12-16T22:09:45.801Z] 32132.75 IOPS, 125.52 MiB/s [2024-12-16T22:09:45.801Z] 32323.20 IOPS, 126.26 MiB/s 00:13:39.454 Latency(us) 00:13:39.454 [2024-12-16T22:09:45.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.454 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:39.454 xnvme_bdev : 5.00 32316.72 126.24 0.00 0.00 1976.54 611.25 5041.23 00:13:39.454 [2024-12-16T22:09:45.801Z] =================================================================================================================== 00:13:39.454 [2024-12-16T22:09:45.801Z] Total : 32316.72 126.24 0.00 0.00 1976.54 611.25 5041.23 00:13:39.454 22:09:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:39.454 22:09:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:39.454 22:09:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:39.454 22:09:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:39.454 22:09:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:39.454 { 00:13:39.454 "subsystems": [ 00:13:39.454 { 00:13:39.454 "subsystem": "bdev", 00:13:39.454 "config": [ 00:13:39.454 { 00:13:39.454 "params": { 00:13:39.454 "io_mechanism": "io_uring", 00:13:39.454 "conserve_cpu": false, 00:13:39.454 "filename": "/dev/nvme0n1", 00:13:39.454 "name": "xnvme_bdev" 00:13:39.454 }, 00:13:39.454 "method": "bdev_xnvme_create" 00:13:39.454 }, 00:13:39.454 { 00:13:39.454 "method": "bdev_wait_for_examine" 00:13:39.454 } 00:13:39.454 ] 00:13:39.454 } 00:13:39.454 ] 00:13:39.454 } 00:13:39.454 [2024-12-16 22:09:45.725774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
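The bdevperf harness above receives its JSON config over /dev/fd/62; the same run can be reproduced with the config written to a regular file instead. A sketch, with bdev.json as an assumed filename and the config copied verbatim from the dump above:

cat > bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "io_uring", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
./build/examples/bdevperf --json bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096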
00:13:39.454 [2024-12-16 22:09:45.725925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83427 ] 00:13:39.716 [2024-12-16 22:09:45.888034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.716 [2024-12-16 22:09:45.917089] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.716 Running I/O for 5 seconds... 00:13:42.047 33362.00 IOPS, 130.32 MiB/s [2024-12-16T22:09:49.339Z] 33410.00 IOPS, 130.51 MiB/s [2024-12-16T22:09:50.283Z] 33146.33 IOPS, 129.48 MiB/s [2024-12-16T22:09:51.227Z] 33070.75 IOPS, 129.18 MiB/s [2024-12-16T22:09:51.227Z] 32909.20 IOPS, 128.55 MiB/s 00:13:44.880 Latency(us) 00:13:44.880 [2024-12-16T22:09:51.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.880 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:44.880 xnvme_bdev : 5.00 32907.22 128.54 0.00 0.00 1940.85 381.24 8570.09 00:13:44.880 [2024-12-16T22:09:51.227Z] =================================================================================================================== 00:13:44.880 [2024-12-16T22:09:51.228Z] Total : 32907.22 128.54 0.00 0.00 1940.85 381.24 8570.09 00:13:44.881 00:13:44.881 real 0m11.133s 00:13:44.881 user 0m4.465s 00:13:44.881 sys 0m6.411s 00:13:44.881 22:09:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.881 ************************************ 00:13:44.881 END TEST xnvme_bdevperf 00:13:44.881 ************************************ 00:13:44.881 22:09:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.143 22:09:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:45.143 22:09:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.143 22:09:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.143 22:09:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.143 ************************************ 00:13:45.143 START TEST xnvme_fio_plugin 00:13:45.143 ************************************ 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:45.143 { 00:13:45.143 "subsystems": [ 00:13:45.143 { 00:13:45.143 "subsystem": "bdev", 00:13:45.143 "config": [ 00:13:45.143 { 00:13:45.143 "params": { 00:13:45.143 "io_mechanism": "io_uring", 00:13:45.143 "conserve_cpu": false, 00:13:45.143 "filename": "/dev/nvme0n1", 00:13:45.143 "name": "xnvme_bdev" 00:13:45.143 }, 00:13:45.143 "method": "bdev_xnvme_create" 00:13:45.143 }, 00:13:45.143 { 00:13:45.143 "method": "bdev_wait_for_examine" 00:13:45.143 } 00:13:45.143 ] 00:13:45.143 } 00:13:45.143 ] 00:13:45.143 } 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:45.143 22:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.404 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:45.404 fio-3.35 00:13:45.404 Starting 1 thread 00:13:50.700 00:13:50.700 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83530: Mon Dec 16 22:09:56 2024 00:13:50.700 read: IOPS=32.0k, BW=125MiB/s (131MB/s)(626MiB/5002msec) 00:13:50.700 slat (nsec): min=2878, max=51755, avg=3420.03, stdev=1695.18 00:13:50.700 clat (usec): min=1119, max=3515, avg=1860.20, stdev=272.45 00:13:50.700 lat (usec): min=1122, max=3542, avg=1863.62, stdev=272.67 00:13:50.700 clat percentiles (usec): 00:13:50.700 | 1.00th=[ 1336], 5.00th=[ 1467], 10.00th=[ 1532], 20.00th=[ 1631], 00:13:50.700 | 30.00th=[ 1696], 40.00th=[ 1762], 50.00th=[ 1844], 60.00th=[ 1909], 00:13:50.700 | 70.00th=[ 1991], 80.00th=[ 2089], 90.00th=[ 2212], 95.00th=[ 2343], 00:13:50.700 | 99.00th=[ 2606], 99.50th=[ 2704], 99.90th=[ 2966], 99.95th=[ 3064], 00:13:50.700 | 99.99th=[ 3392] 00:13:50.700 bw 
( KiB/s): min=126464, max=132096, per=100.00%, avg=128284.67, stdev=1947.10, samples=9 00:13:50.700 iops : min=31616, max=33024, avg=32071.11, stdev=486.83, samples=9 00:13:50.700 lat (msec) : 2=71.58%, 4=28.42% 00:13:50.700 cpu : usr=31.55%, sys=67.31%, ctx=11, majf=0, minf=771 00:13:50.700 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:50.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.700 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:50.700 issued rwts: total=160192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.700 00:13:50.700 Run status group 0 (all jobs): 00:13:50.700 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=626MiB (656MB), run=5002-5002msec 00:13:51.273 ----------------------------------------------------- 00:13:51.273 Suppressions used: 00:13:51.273 count bytes template 00:13:51.273 1 11 /usr/src/fio/parse.c 00:13:51.273 1 8 libtcmalloc_minimal.so 00:13:51.273 1 904 libcrypto.so 00:13:51.273 ----------------------------------------------------- 00:13:51.273 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.273 22:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.273 { 00:13:51.273 "subsystems": [ 00:13:51.273 { 00:13:51.273 "subsystem": "bdev", 00:13:51.273 "config": [ 00:13:51.273 { 00:13:51.273 "params": { 00:13:51.273 "io_mechanism": "io_uring", 00:13:51.273 "conserve_cpu": false, 00:13:51.273 "filename": "/dev/nvme0n1", 00:13:51.273 "name": "xnvme_bdev" 00:13:51.273 }, 00:13:51.273 "method": "bdev_xnvme_create" 00:13:51.273 }, 00:13:51.273 { 00:13:51.273 "method": "bdev_wait_for_examine" 00:13:51.273 } 00:13:51.273 ] 00:13:51.273 } 00:13:51.273 ] 00:13:51.273 } 00:13:51.273 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.273 fio-3.35 00:13:51.273 Starting 1 thread 00:13:57.866 00:13:57.866 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83611: Mon Dec 16 22:10:02 2024 00:13:57.866 write: IOPS=32.8k, BW=128MiB/s (134MB/s)(640MiB/5001msec); 0 zone resets 00:13:57.866 slat (nsec): min=2889, max=55243, avg=3477.32, stdev=1770.81 00:13:57.866 clat (usec): min=441, max=8532, avg=1810.58, stdev=309.74 00:13:57.866 lat (usec): min=445, max=8536, avg=1814.06, stdev=310.00 00:13:57.866 clat percentiles (usec): 00:13:57.866 | 1.00th=[ 1237], 5.00th=[ 1385], 10.00th=[ 1450], 20.00th=[ 1565], 00:13:57.866 | 30.00th=[ 1631], 40.00th=[ 1713], 50.00th=[ 1778], 60.00th=[ 1860], 00:13:57.866 | 70.00th=[ 1942], 80.00th=[ 2040], 90.00th=[ 2180], 95.00th=[ 2343], 00:13:57.866 | 99.00th=[ 2704], 99.50th=[ 2900], 99.90th=[ 3294], 99.95th=[ 3556], 00:13:57.866 | 99.99th=[ 6325] 00:13:57.866 bw ( KiB/s): min=126960, max=137672, per=99.75%, avg=130808.67, stdev=3168.24, samples=9 00:13:57.866 iops : min=31740, max=34418, avg=32702.11, stdev=792.10, samples=9 00:13:57.866 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.01% 00:13:57.866 lat (msec) : 2=76.03%, 4=23.90%, 10=0.02% 00:13:57.866 cpu : usr=32.88%, sys=66.00%, ctx=15, majf=0, minf=772 00:13:57.866 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:57.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.866 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:57.866 issued rwts: total=0,163960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:57.866 00:13:57.866 Run status group 0 (all jobs): 00:13:57.866 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=640MiB (672MB), run=5001-5001msec 00:13:57.866 ----------------------------------------------------- 00:13:57.866 Suppressions used: 00:13:57.866 count bytes template 00:13:57.866 1 11 /usr/src/fio/parse.c 00:13:57.866 1 8 libtcmalloc_minimal.so 00:13:57.866 1 904 libcrypto.so 00:13:57.866 ----------------------------------------------------- 00:13:57.866 00:13:57.866 00:13:57.866 real 0m12.164s 00:13:57.866 user 0m4.488s 00:13:57.866 sys 0m7.236s 00:13:57.866 22:10:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:13:57.866 ************************************ 00:13:57.866 END TEST xnvme_fio_plugin 00:13:57.866 ************************************ 00:13:57.866 22:10:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.866 22:10:03 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:57.866 22:10:03 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:57.866 22:10:03 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:57.866 22:10:03 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:57.866 22:10:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.866 22:10:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.866 22:10:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.866 ************************************ 00:13:57.866 START TEST xnvme_rpc 00:13:57.866 ************************************ 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83692 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83692 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83692 ']' 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.866 22:10:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:57.866 [2024-12-16 22:10:03.616672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
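This pass repeats the RPC test with CPU conservation enabled; the only on-the-wire difference is the -c flag mapped from cc["true"] above, and the jq probe should then report true. A sketch under the same assumptions as the earlier RPC example:

./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c => conserve_cpu: true
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true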
00:13:57.866 [2024-12-16 22:10:03.616814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83692 ] 00:13:57.866 [2024-12-16 22:10:03.775685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.866 [2024-12-16 22:10:03.817494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.128 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.128 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:58.128 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:58.128 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.128 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.390 xnvme_bdev 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83692 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83692 ']' 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83692 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83692 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.390 killing process with pid 83692 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83692' 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83692 00:13:58.390 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83692 00:13:58.652 00:13:58.652 real 0m1.424s 00:13:58.652 user 0m1.388s 00:13:58.652 sys 0m0.521s 00:13:58.652 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.652 ************************************ 00:13:58.652 END TEST xnvme_rpc 00:13:58.652 22:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.652 ************************************ 00:13:58.913 22:10:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:58.913 22:10:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.913 22:10:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.913 22:10:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.913 ************************************ 00:13:58.913 START TEST xnvme_bdevperf 00:13:58.913 ************************************ 00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:58.913 22:10:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:58.914 { 00:13:58.914 "subsystems": [ 00:13:58.914 { 00:13:58.914 "subsystem": "bdev", 00:13:58.914 "config": [ 00:13:58.914 { 00:13:58.914 "params": { 00:13:58.914 "io_mechanism": "io_uring", 00:13:58.914 "conserve_cpu": true, 00:13:58.914 "filename": "/dev/nvme0n1", 00:13:58.914 "name": "xnvme_bdev" 00:13:58.914 }, 00:13:58.914 "method": "bdev_xnvme_create" 00:13:58.914 }, 00:13:58.914 { 00:13:58.914 "method": "bdev_wait_for_examine" 00:13:58.914 } 00:13:58.914 ] 00:13:58.914 } 00:13:58.914 ] 00:13:58.914 } 00:13:58.914 [2024-12-16 22:10:05.087440] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:13:58.914 [2024-12-16 22:10:05.087595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83749 ] 00:13:58.914 [2024-12-16 22:10:05.250939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.174 [2024-12-16 22:10:05.280361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.174 Running I/O for 5 seconds... 00:14:01.105 31224.00 IOPS, 121.97 MiB/s [2024-12-16T22:10:08.420Z] 31478.00 IOPS, 122.96 MiB/s [2024-12-16T22:10:09.806Z] 31448.33 IOPS, 122.85 MiB/s [2024-12-16T22:10:10.750Z] 31563.25 IOPS, 123.29 MiB/s [2024-12-16T22:10:10.750Z] 31645.00 IOPS, 123.61 MiB/s 00:14:04.403 Latency(us) 00:14:04.403 [2024-12-16T22:10:10.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.403 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:04.403 xnvme_bdev : 5.00 31641.56 123.60 0.00 0.00 2018.80 1064.96 4965.61 00:14:04.403 [2024-12-16T22:10:10.750Z] =================================================================================================================== 00:14:04.403 [2024-12-16T22:10:10.750Z] Total : 31641.56 123.60 0.00 0.00 2018.80 1064.96 4965.61 00:14:04.403 22:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:04.403 22:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:04.403 22:10:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:04.403 22:10:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:04.403 22:10:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:04.403 { 00:14:04.403 "subsystems": [ 00:14:04.403 { 00:14:04.403 "subsystem": "bdev", 00:14:04.403 "config": [ 00:14:04.403 { 00:14:04.403 "params": { 00:14:04.403 "io_mechanism": "io_uring", 00:14:04.403 "conserve_cpu": true, 00:14:04.403 "filename": "/dev/nvme0n1", 00:14:04.403 "name": "xnvme_bdev" 00:14:04.403 }, 00:14:04.403 "method": "bdev_xnvme_create" 00:14:04.403 }, 00:14:04.403 { 00:14:04.403 "method": "bdev_wait_for_examine" 00:14:04.403 } 00:14:04.403 ] 00:14:04.403 } 00:14:04.403 ] 00:14:04.403 } 00:14:04.403 [2024-12-16 22:10:10.640782] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
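Between the randread and randwrite bdevperf passes only the workload selector changes; with the conserve_cpu=true config above saved to a file (bdev.json, an assumed name), the write-side invocation would differ from the earlier sketch only in -w:

./build/examples/bdevperf --json bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096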
00:14:04.403 [2024-12-16 22:10:10.640943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83820 ] 00:14:04.664 [2024-12-16 22:10:10.803960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.664 [2024-12-16 22:10:10.832901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.664 Running I/O for 5 seconds... 00:14:06.995 33325.00 IOPS, 130.18 MiB/s [2024-12-16T22:10:14.286Z] 32751.00 IOPS, 127.93 MiB/s [2024-12-16T22:10:15.229Z] 32819.00 IOPS, 128.20 MiB/s [2024-12-16T22:10:16.172Z] 32665.00 IOPS, 127.60 MiB/s [2024-12-16T22:10:16.172Z] 32646.60 IOPS, 127.53 MiB/s 00:14:09.825 Latency(us) 00:14:09.825 [2024-12-16T22:10:16.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.825 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:09.825 xnvme_bdev : 5.00 32636.18 127.49 0.00 0.00 1956.99 737.28 6654.42 00:14:09.825 [2024-12-16T22:10:16.172Z] =================================================================================================================== 00:14:09.825 [2024-12-16T22:10:16.172Z] Total : 32636.18 127.49 0.00 0.00 1956.99 737.28 6654.42 00:14:09.825 00:14:09.825 real 0m11.109s 00:14:09.825 user 0m7.197s 00:14:09.825 sys 0m3.412s 00:14:09.825 22:10:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.825 ************************************ 00:14:09.825 END TEST xnvme_bdevperf 00:14:09.825 ************************************ 00:14:09.825 22:10:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:10.086 22:10:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:10.086 22:10:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.086 22:10:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.086 22:10:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.086 ************************************ 00:14:10.086 START TEST xnvme_fio_plugin 00:14:10.086 ************************************ 00:14:10.086 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:10.086 22:10:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:10.087 22:10:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.087 { 00:14:10.087 "subsystems": [ 00:14:10.087 { 00:14:10.087 "subsystem": "bdev", 00:14:10.087 "config": [ 00:14:10.087 { 00:14:10.087 "params": { 00:14:10.087 "io_mechanism": "io_uring", 00:14:10.087 "conserve_cpu": true, 00:14:10.087 "filename": "/dev/nvme0n1", 00:14:10.087 "name": "xnvme_bdev" 00:14:10.087 }, 00:14:10.087 "method": "bdev_xnvme_create" 00:14:10.087 }, 00:14:10.087 { 00:14:10.087 "method": "bdev_wait_for_examine" 00:14:10.087 } 00:14:10.087 ] 00:14:10.087 } 00:14:10.087 ] 00:14:10.087 } 00:14:10.087 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:10.087 fio-3.35 00:14:10.087 Starting 1 thread 00:14:16.676 00:14:16.676 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83927: Mon Dec 16 22:10:21 2024 00:14:16.676 read: IOPS=31.3k, BW=122MiB/s (128MB/s)(611MiB/5001msec) 00:14:16.676 slat (nsec): min=2857, max=84337, avg=3420.18, stdev=1762.31 00:14:16.677 clat (usec): min=1066, max=3382, avg=1907.98, stdev=289.62 00:14:16.677 lat (usec): min=1069, max=3410, avg=1911.40, stdev=289.91 00:14:16.677 clat percentiles (usec): 00:14:16.677 | 1.00th=[ 1352], 5.00th=[ 1483], 10.00th=[ 1549], 20.00th=[ 1663], 00:14:16.677 | 30.00th=[ 1745], 40.00th=[ 1811], 50.00th=[ 1893], 60.00th=[ 1958], 00:14:16.677 | 70.00th=[ 2040], 80.00th=[ 2147], 90.00th=[ 2278], 95.00th=[ 2409], 00:14:16.677 | 99.00th=[ 2704], 99.50th=[ 2835], 99.90th=[ 3130], 99.95th=[ 3195], 00:14:16.677 | 99.99th=[ 3326] 00:14:16.677 bw 
( KiB/s): min=121856, max=126976, per=99.81%, avg=124814.22, stdev=1634.75, samples=9 00:14:16.677 iops : min=30464, max=31744, avg=31203.56, stdev=408.69, samples=9 00:14:16.677 lat (msec) : 2=64.80%, 4=35.20% 00:14:16.677 cpu : usr=65.52%, sys=30.94%, ctx=8, majf=0, minf=771 00:14:16.677 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:16.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.677 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:16.677 issued rwts: total=156352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:16.677 00:14:16.677 Run status group 0 (all jobs): 00:14:16.677 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=611MiB (640MB), run=5001-5001msec 00:14:16.677 ----------------------------------------------------- 00:14:16.677 Suppressions used: 00:14:16.677 count bytes template 00:14:16.677 1 11 /usr/src/fio/parse.c 00:14:16.677 1 8 libtcmalloc_minimal.so 00:14:16.677 1 904 libcrypto.so 00:14:16.677 ----------------------------------------------------- 00:14:16.677 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:16.677 22:10:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.677 { 00:14:16.677 "subsystems": [ 00:14:16.677 { 00:14:16.677 "subsystem": "bdev", 00:14:16.677 "config": [ 00:14:16.677 { 00:14:16.677 "params": { 00:14:16.677 "io_mechanism": "io_uring", 00:14:16.677 "conserve_cpu": true, 00:14:16.677 "filename": "/dev/nvme0n1", 00:14:16.677 "name": "xnvme_bdev" 00:14:16.677 }, 00:14:16.677 "method": "bdev_xnvme_create" 00:14:16.677 }, 00:14:16.677 { 00:14:16.677 "method": "bdev_wait_for_examine" 00:14:16.677 } 00:14:16.677 ] 00:14:16.677 } 00:14:16.677 ] 00:14:16.677 } 00:14:16.677 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:16.677 fio-3.35 00:14:16.677 Starting 1 thread 00:14:21.966 00:14:21.966 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84009: Mon Dec 16 22:10:27 2024 00:14:21.966 write: IOPS=32.2k, BW=126MiB/s (132MB/s)(629MiB/5001msec); 0 zone resets 00:14:21.966 slat (nsec): min=2901, max=88260, avg=3574.94, stdev=1768.45 00:14:21.966 clat (usec): min=1153, max=6863, avg=1845.68, stdev=283.16 00:14:21.966 lat (usec): min=1156, max=6866, avg=1849.26, stdev=283.44 00:14:21.966 clat percentiles (usec): 00:14:21.966 | 1.00th=[ 1319], 5.00th=[ 1434], 10.00th=[ 1500], 20.00th=[ 1598], 00:14:21.966 | 30.00th=[ 1680], 40.00th=[ 1745], 50.00th=[ 1827], 60.00th=[ 1893], 00:14:21.966 | 70.00th=[ 1975], 80.00th=[ 2073], 90.00th=[ 2212], 95.00th=[ 2343], 00:14:21.966 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3097], 99.95th=[ 3523], 00:14:21.966 | 99.99th=[ 4228] 00:14:21.966 bw ( KiB/s): min=126736, max=133072, per=100.00%, avg=129137.78, stdev=2229.84, samples=9 00:14:21.967 iops : min=31684, max=33268, avg=32284.44, stdev=557.46, samples=9 00:14:21.967 lat (msec) : 2=72.90%, 4=27.08%, 10=0.02% 00:14:21.967 cpu : usr=68.04%, sys=28.66%, ctx=11, majf=0, minf=772 00:14:21.967 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:21.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:21.967 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:21.967 issued rwts: total=0,160928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:21.967 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:21.967 00:14:21.967 Run status group 0 (all jobs): 00:14:21.967 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=629MiB (659MB), run=5001-5001msec 00:14:21.967 ----------------------------------------------------- 00:14:21.967 Suppressions used: 00:14:21.967 count bytes template 00:14:21.967 1 11 /usr/src/fio/parse.c 00:14:21.967 1 8 libtcmalloc_minimal.so 00:14:21.967 1 904 libcrypto.so 00:14:21.967 ----------------------------------------------------- 00:14:21.967 00:14:21.967 00:14:21.967 real 0m12.002s 00:14:21.967 user 0m7.788s 00:14:21.967 sys 0m3.564s 00:14:21.967 22:10:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.967 
************************************ 00:14:21.967 END TEST xnvme_fio_plugin 00:14:21.967 ************************************ 00:14:21.967 22:10:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:21.967 22:10:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:21.967 22:10:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.967 22:10:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.967 22:10:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.967 ************************************ 00:14:21.967 START TEST xnvme_rpc 00:14:21.967 ************************************ 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84084 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84084 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84084 ']' 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:21.967 22:10:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.228 [2024-12-16 22:10:28.348568] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
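The xnvme_fio_plugin test that just ended shows the harness resolving the ASAN runtime out of the plugin's ldd output and preloading it ahead of the SPDK fio plugin, since an instrumented shared object must see libasan before anything else. A minimal sketch of that step, assuming the paths from the trace (the JSON conf descriptor /dev/fd/62 is set up by the harness and is left as a placeholder here):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
            --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
            --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
            --time_based --runtime=5 --thread=1 --name xnvme_bdev
    fi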
00:14:22.228 [2024-12-16 22:10:28.348718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84084 ] 00:14:22.228 [2024-12-16 22:10:28.511815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.228 [2024-12-16 22:10:28.540553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 xnvme_bdev 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84084 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84084 ']' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84084 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84084 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.172 killing process with pid 84084 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84084' 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84084 00:14:23.172 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84084 00:14:23.433 00:14:23.433 real 0m1.426s 00:14:23.433 user 0m1.487s 00:14:23.433 sys 0m0.428s 00:14:23.433 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.433 ************************************ 00:14:23.433 END TEST xnvme_rpc 00:14:23.433 ************************************ 00:14:23.433 22:10:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.434 22:10:29 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:23.434 22:10:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:23.434 22:10:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.434 22:10:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.434 ************************************ 00:14:23.434 START TEST xnvme_bdevperf 00:14:23.434 ************************************ 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:23.434 22:10:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:23.695 { 00:14:23.695 "subsystems": [ 00:14:23.695 { 00:14:23.695 "subsystem": "bdev", 00:14:23.695 "config": [ 00:14:23.695 { 00:14:23.695 "params": { 00:14:23.695 "io_mechanism": "io_uring_cmd", 00:14:23.695 "conserve_cpu": false, 00:14:23.695 "filename": "/dev/ng0n1", 00:14:23.695 "name": "xnvme_bdev" 00:14:23.695 }, 00:14:23.695 "method": "bdev_xnvme_create" 00:14:23.695 }, 00:14:23.695 { 00:14:23.695 "method": "bdev_wait_for_examine" 00:14:23.695 } 00:14:23.695 ] 00:14:23.695 } 00:14:23.695 ] 00:14:23.695 } 00:14:23.695 [2024-12-16 22:10:29.827376] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:14:23.695 [2024-12-16 22:10:29.827505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84136 ] 00:14:23.695 [2024-12-16 22:10:29.990810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.695 [2024-12-16 22:10:30.020595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.956 Running I/O for 5 seconds... 00:14:25.843 32576.00 IOPS, 127.25 MiB/s [2024-12-16T22:10:33.133Z] 32480.00 IOPS, 126.88 MiB/s [2024-12-16T22:10:34.517Z] 32789.33 IOPS, 128.08 MiB/s [2024-12-16T22:10:35.459Z] 32784.00 IOPS, 128.06 MiB/s 00:14:29.112 Latency(us) 00:14:29.112 [2024-12-16T22:10:35.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.112 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:29.112 xnvme_bdev : 5.00 32544.96 127.13 0.00 0.00 1962.80 636.46 4612.73 00:14:29.112 [2024-12-16T22:10:35.459Z] =================================================================================================================== 00:14:29.112 [2024-12-16T22:10:35.459Z] Total : 32544.96 127.13 0.00 0.00 1962.80 636.46 4612.73 00:14:29.112 22:10:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.112 22:10:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:29.112 22:10:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:29.112 22:10:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:29.112 22:10:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.112 { 00:14:29.112 "subsystems": [ 00:14:29.112 { 00:14:29.112 "subsystem": "bdev", 00:14:29.112 "config": [ 00:14:29.112 { 00:14:29.112 "params": { 00:14:29.112 "io_mechanism": "io_uring_cmd", 00:14:29.112 "conserve_cpu": false, 00:14:29.112 "filename": "/dev/ng0n1", 00:14:29.112 "name": "xnvme_bdev" 00:14:29.112 }, 00:14:29.112 "method": "bdev_xnvme_create" 00:14:29.112 }, 00:14:29.112 { 00:14:29.112 "method": "bdev_wait_for_examine" 00:14:29.112 } 00:14:29.112 ] 00:14:29.112 } 00:14:29.112 ] 00:14:29.112 } 00:14:29.112 [2024-12-16 22:10:35.379170] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
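The --json /dev/fd/62 argument in the bdevperf invocations above is how the harness hands over the generated config without touching disk: gen_conf's output is attached to a file descriptor. A minimal reconstruction with bash process substitution, inlining the same JSON the log prints (paths and params copied from the config block above):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
       "conserve_cpu":false,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    # <(...) expands to a /dev/fd/N path, matching the /dev/fd/62 seen in the log
    "$bdevperf" --json <(printf '%s\n' "$conf") -q 64 -w randread -t 5 -T xnvme_bdev -o 4096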
00:14:29.113 [2024-12-16 22:10:35.379310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84209 ] 00:14:29.373 [2024-12-16 22:10:35.542600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.373 [2024-12-16 22:10:35.563663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.373 Running I/O for 5 seconds... 00:14:31.328 38255.00 IOPS, 149.43 MiB/s [2024-12-16T22:10:39.057Z] 35783.00 IOPS, 139.78 MiB/s [2024-12-16T22:10:40.002Z] 35770.67 IOPS, 139.73 MiB/s [2024-12-16T22:10:40.945Z] 35978.25 IOPS, 140.54 MiB/s [2024-12-16T22:10:40.945Z] 35425.60 IOPS, 138.38 MiB/s 00:14:34.598 Latency(us) 00:14:34.598 [2024-12-16T22:10:40.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.598 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:34.598 xnvme_bdev : 5.00 35424.90 138.38 0.00 0.00 1802.73 368.64 3856.54 00:14:34.599 [2024-12-16T22:10:40.946Z] =================================================================================================================== 00:14:34.599 [2024-12-16T22:10:40.946Z] Total : 35424.90 138.38 0.00 0.00 1802.73 368.64 3856.54 00:14:34.599 22:10:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.599 22:10:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:34.599 22:10:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:34.599 22:10:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:34.599 22:10:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.599 { 00:14:34.599 "subsystems": [ 00:14:34.599 { 00:14:34.599 "subsystem": "bdev", 00:14:34.599 "config": [ 00:14:34.599 { 00:14:34.599 "params": { 00:14:34.599 "io_mechanism": "io_uring_cmd", 00:14:34.599 "conserve_cpu": false, 00:14:34.599 "filename": "/dev/ng0n1", 00:14:34.599 "name": "xnvme_bdev" 00:14:34.599 }, 00:14:34.599 "method": "bdev_xnvme_create" 00:14:34.599 }, 00:14:34.599 { 00:14:34.599 "method": "bdev_wait_for_examine" 00:14:34.599 } 00:14:34.599 ] 00:14:34.599 } 00:14:34.599 ] 00:14:34.599 } 00:14:34.599 [2024-12-16 22:10:40.898431] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:14:34.599 [2024-12-16 22:10:40.898572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84273 ] 00:14:34.859 [2024-12-16 22:10:41.051267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.859 [2024-12-16 22:10:41.079698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.859 Running I/O for 5 seconds... 
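In the per-second samples that follow, the bandwidth column is fixed to the IOPS column by the 4 KiB IO size (-o 4096): MiB/s = IOPS * 4096 / 2^20. The first unmap sample below works out to 79424 * 4096 / 1048576 = 310.25 MiB/s, exactly as printed. A one-line check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 79424 * 4096 / 1048576 }'    # prints 310.25 MiB/s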
00:14:37.186 79424.00 IOPS, 310.25 MiB/s [2024-12-16T22:10:44.476Z] 79584.00 IOPS, 310.88 MiB/s [2024-12-16T22:10:45.417Z] 79210.67 IOPS, 309.42 MiB/s [2024-12-16T22:10:46.358Z] 79184.00 IOPS, 309.31 MiB/s [2024-12-16T22:10:46.358Z] 79705.60 IOPS, 311.35 MiB/s 00:14:40.011 Latency(us) 00:14:40.011 [2024-12-16T22:10:46.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.011 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:40.011 xnvme_bdev : 5.00 79664.47 311.19 0.00 0.00 800.02 482.07 2583.63 00:14:40.011 [2024-12-16T22:10:46.358Z] =================================================================================================================== 00:14:40.011 [2024-12-16T22:10:46.358Z] Total : 79664.47 311.19 0.00 0.00 800.02 482.07 2583.63 00:14:40.011 22:10:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:40.011 22:10:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:40.011 22:10:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:40.011 22:10:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:40.011 22:10:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:40.011 { 00:14:40.011 "subsystems": [ 00:14:40.011 { 00:14:40.012 "subsystem": "bdev", 00:14:40.012 "config": [ 00:14:40.012 { 00:14:40.012 "params": { 00:14:40.012 "io_mechanism": "io_uring_cmd", 00:14:40.012 "conserve_cpu": false, 00:14:40.012 "filename": "/dev/ng0n1", 00:14:40.012 "name": "xnvme_bdev" 00:14:40.012 }, 00:14:40.012 "method": "bdev_xnvme_create" 00:14:40.012 }, 00:14:40.012 { 00:14:40.012 "method": "bdev_wait_for_examine" 00:14:40.012 } 00:14:40.012 ] 00:14:40.012 } 00:14:40.012 ] 00:14:40.012 } 00:14:40.272 [2024-12-16 22:10:46.377579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:14:40.273 [2024-12-16 22:10:46.377696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84343 ] 00:14:40.273 [2024-12-16 22:10:46.530962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.273 [2024-12-16 22:10:46.550530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.273 Running I/O for 5 seconds... 
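The write_zeroes pass starting here is the last of four bdevperf workloads driven through an identical command line, with only -w changing. A condensed sketch of that loop, reusing bdevperf and conf from the sketch above (the real script iterates a nameref named io_pattern_ref; the literal array here is inferred from the four runs in this log):

    io_patterns=(randread randwrite unmap write_zeroes)
    for w in "${io_patterns[@]}"; do
        # identical flags on every pass; only the workload differs
        "$bdevperf" --json <(printf '%s\n' "$conf") -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done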
00:14:42.600 53387.00 IOPS, 208.54 MiB/s [2024-12-16T22:10:49.890Z] 48113.50 IOPS, 187.94 MiB/s [2024-12-16T22:10:50.834Z] 45700.33 IOPS, 178.52 MiB/s [2024-12-16T22:10:51.777Z] 43905.75 IOPS, 171.51 MiB/s [2024-12-16T22:10:51.777Z] 42793.00 IOPS, 167.16 MiB/s 00:14:45.430 Latency(us) 00:14:45.430 [2024-12-16T22:10:51.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.430 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:45.430 xnvme_bdev : 5.00 42769.44 167.07 0.00 0.00 1492.14 195.35 18450.90 00:14:45.430 [2024-12-16T22:10:51.777Z] =================================================================================================================== 00:14:45.430 [2024-12-16T22:10:51.777Z] Total : 42769.44 167.07 0.00 0.00 1492.14 195.35 18450.90 00:14:45.691 00:14:45.691 real 0m22.042s 00:14:45.691 user 0m10.588s 00:14:45.691 sys 0m10.989s 00:14:45.691 22:10:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.691 ************************************ 00:14:45.691 END TEST xnvme_bdevperf 00:14:45.691 ************************************ 00:14:45.691 22:10:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.691 22:10:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:45.691 22:10:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:45.691 22:10:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.691 22:10:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.691 ************************************ 00:14:45.691 START TEST xnvme_fio_plugin 00:14:45.691 ************************************ 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:45.691 22:10:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:45.691 { 00:14:45.691 "subsystems": [ 00:14:45.691 { 00:14:45.691 "subsystem": "bdev", 00:14:45.691 "config": [ 00:14:45.691 { 00:14:45.691 "params": { 00:14:45.691 "io_mechanism": "io_uring_cmd", 00:14:45.691 "conserve_cpu": false, 00:14:45.691 "filename": "/dev/ng0n1", 00:14:45.691 "name": "xnvme_bdev" 00:14:45.691 }, 00:14:45.691 "method": "bdev_xnvme_create" 00:14:45.691 }, 00:14:45.691 { 00:14:45.691 "method": "bdev_wait_for_examine" 00:14:45.691 } 00:14:45.691 ] 00:14:45.691 } 00:14:45.691 ] 00:14:45.691 } 00:14:45.952 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:45.952 fio-3.35 00:14:45.952 Starting 1 thread 00:14:51.240 00:14:51.240 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84446: Mon Dec 16 22:10:57 2024 00:14:51.240 read: IOPS=35.7k, BW=139MiB/s (146MB/s)(698MiB/5002msec) 00:14:51.240 slat (nsec): min=2875, max=64219, avg=3368.02, stdev=1574.64 00:14:51.240 clat (usec): min=877, max=3703, avg=1656.87, stdev=312.32 00:14:51.240 lat (usec): min=880, max=3717, avg=1660.24, stdev=312.61 00:14:51.240 clat percentiles (usec): 00:14:51.240 | 1.00th=[ 1090], 5.00th=[ 1188], 10.00th=[ 1254], 20.00th=[ 1369], 00:14:51.240 | 30.00th=[ 1467], 40.00th=[ 1565], 50.00th=[ 1647], 60.00th=[ 1713], 00:14:51.240 | 70.00th=[ 1811], 80.00th=[ 1926], 90.00th=[ 2073], 95.00th=[ 2180], 00:14:51.240 | 99.00th=[ 2442], 99.50th=[ 2507], 99.90th=[ 2769], 99.95th=[ 3392], 00:14:51.240 | 99.99th=[ 3654] 00:14:51.240 bw ( KiB/s): min=130048, max=166560, per=98.92%, avg=141293.89, stdev=13464.53, samples=9 00:14:51.240 iops : min=32512, max=41644, avg=35323.89, stdev=3367.02, samples=9 00:14:51.240 lat (usec) : 1000=0.08% 00:14:51.240 lat (msec) : 2=85.58%, 4=14.34% 00:14:51.240 cpu : usr=38.11%, sys=60.71%, ctx=36, majf=0, minf=771 00:14:51.240 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:51.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.241 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:14:51.241 issued rwts: total=178624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:51.241 00:14:51.241 Run status group 0 (all jobs): 00:14:51.241 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=698MiB (732MB), run=5002-5002msec 00:14:51.813 ----------------------------------------------------- 00:14:51.813 Suppressions used: 00:14:51.813 count bytes template 00:14:51.813 1 11 /usr/src/fio/parse.c 00:14:51.813 1 8 libtcmalloc_minimal.so 00:14:51.813 1 904 libcrypto.so 00:14:51.813 ----------------------------------------------------- 00:14:51.813 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.813 22:10:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.813 { 00:14:51.813 "subsystems": [ 00:14:51.813 { 00:14:51.813 "subsystem": "bdev", 00:14:51.813 "config": [ 00:14:51.813 { 00:14:51.813 "params": { 00:14:51.813 "io_mechanism": "io_uring_cmd", 00:14:51.813 "conserve_cpu": false, 00:14:51.813 "filename": "/dev/ng0n1", 00:14:51.813 "name": "xnvme_bdev" 00:14:51.813 }, 00:14:51.813 "method": "bdev_xnvme_create" 00:14:51.813 }, 00:14:51.813 { 00:14:51.813 "method": "bdev_wait_for_examine" 00:14:51.813 } 00:14:51.813 ] 00:14:51.813 } 00:14:51.813 ] 00:14:51.813 } 00:14:51.813 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.813 fio-3.35 00:14:51.813 Starting 1 thread 00:14:58.464 00:14:58.464 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84525: Mon Dec 16 22:11:03 2024 00:14:58.464 write: IOPS=38.5k, BW=150MiB/s (158MB/s)(752MiB/5001msec); 0 zone resets 00:14:58.464 slat (nsec): min=2895, max=73618, avg=3530.14, stdev=1566.80 00:14:58.464 clat (usec): min=258, max=7317, avg=1525.17, stdev=295.36 00:14:58.464 lat (usec): min=266, max=7320, avg=1528.70, stdev=295.68 00:14:58.464 clat percentiles (usec): 00:14:58.464 | 1.00th=[ 1029], 5.00th=[ 1156], 10.00th=[ 1221], 20.00th=[ 1287], 00:14:58.464 | 30.00th=[ 1352], 40.00th=[ 1418], 50.00th=[ 1483], 60.00th=[ 1549], 00:14:58.464 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1893], 95.00th=[ 2040], 00:14:58.464 | 99.00th=[ 2409], 99.50th=[ 2606], 99.90th=[ 3458], 99.95th=[ 3720], 00:14:58.464 | 99.99th=[ 5473] 00:14:58.464 bw ( KiB/s): min=136072, max=169992, per=98.99%, avg=152464.89, stdev=15054.20, samples=9 00:14:58.464 iops : min=34018, max=42498, avg=38116.22, stdev=3763.55, samples=9 00:14:58.464 lat (usec) : 500=0.03%, 750=0.05%, 1000=0.71% 00:14:58.464 lat (msec) : 2=93.37%, 4=5.82%, 10=0.03% 00:14:58.464 cpu : usr=40.48%, sys=58.50%, ctx=7, majf=0, minf=772 00:14:58.464 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.0%, 16=24.3%, 32=51.6%, >=64=1.7% 00:14:58.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.464 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:58.464 issued rwts: total=0,192562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.464 00:14:58.464 Run status group 0 (all jobs): 00:14:58.464 WRITE: bw=150MiB/s (158MB/s), 150MiB/s-150MiB/s (158MB/s-158MB/s), io=752MiB (789MB), run=5001-5001msec 00:14:58.464 ----------------------------------------------------- 00:14:58.464 Suppressions used: 00:14:58.464 count bytes template 00:14:58.464 1 11 /usr/src/fio/parse.c 00:14:58.464 1 8 libtcmalloc_minimal.so 00:14:58.464 1 904 libcrypto.so 00:14:58.464 ----------------------------------------------------- 00:14:58.464 00:14:58.464 00:14:58.464 real 0m12.057s 00:14:58.464 user 0m5.123s 00:14:58.464 sys 0m6.505s 00:14:58.464 22:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.464 22:11:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 ************************************ 00:14:58.464 END TEST xnvme_fio_plugin 00:14:58.464 ************************************ 00:14:58.464 22:11:03 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:58.464 22:11:03 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:58.464 22:11:03 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:58.464 22:11:03 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:58.464 22:11:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.464 22:11:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.464 22:11:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 ************************************ 00:14:58.464 START TEST xnvme_rpc 00:14:58.464 ************************************ 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84605 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84605 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84605 ']' 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:58.464 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 [2024-12-16 22:11:04.092170] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
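Once this spdk_tgt instance is listening, the test creates the xnvme bdev over RPC with the -c (conserve_cpu) flag and reads each parameter back out of framework_get_config, asserting the values with jq. A minimal sketch of that round trip via scripts/rpc.py, which the trace's rpc_cmd helper wraps (the default socket /var/tmp/spdk.sock is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    # jq filter copied from the rpc_xnvme helper in the trace
    "$rpc" framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected output: true
    "$rpc" bdev_xnvme_delete xnvme_bdev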
00:14:58.464 [2024-12-16 22:11:04.092322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84605 ] 00:14:58.464 [2024-12-16 22:11:04.255615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.464 [2024-12-16 22:11:04.284103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.725 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.725 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:58.725 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:58.725 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.725 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.725 xnvme_bdev 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.726 22:11:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.726 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84605 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84605 ']' 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84605 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84605 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.986 killing process with pid 84605 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84605' 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84605 00:14:58.986 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84605 00:14:59.246 00:14:59.246 real 0m1.394s 00:14:59.246 user 0m1.440s 00:14:59.246 sys 0m0.422s 00:14:59.246 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.246 ************************************ 00:14:59.246 END TEST xnvme_rpc 00:14:59.246 ************************************ 00:14:59.246 22:11:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.246 22:11:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:59.246 22:11:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.246 22:11:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.246 22:11:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.246 ************************************ 00:14:59.246 START TEST xnvme_bdevperf 00:14:59.246 ************************************ 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:59.246 22:11:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:59.246 { 00:14:59.246 "subsystems": [ 00:14:59.246 { 00:14:59.246 "subsystem": "bdev", 00:14:59.246 "config": [ 00:14:59.246 { 00:14:59.246 "params": { 00:14:59.246 "io_mechanism": "io_uring_cmd", 00:14:59.246 "conserve_cpu": true, 00:14:59.246 "filename": "/dev/ng0n1", 00:14:59.246 "name": "xnvme_bdev" 00:14:59.246 }, 00:14:59.246 "method": "bdev_xnvme_create" 00:14:59.246 }, 00:14:59.246 { 00:14:59.246 "method": "bdev_wait_for_examine" 00:14:59.246 } 00:14:59.246 ] 00:14:59.246 } 00:14:59.246 ] 00:14:59.246 } 00:14:59.246 [2024-12-16 22:11:05.532211] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:14:59.246 [2024-12-16 22:11:05.532357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84657 ] 00:14:59.507 [2024-12-16 22:11:05.693187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.507 [2024-12-16 22:11:05.721555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.507 Running I/O for 5 seconds... 00:15:01.840 33664.00 IOPS, 131.50 MiB/s [2024-12-16T22:11:09.129Z] 34047.50 IOPS, 133.00 MiB/s [2024-12-16T22:11:10.072Z] 33738.67 IOPS, 131.79 MiB/s [2024-12-16T22:11:11.018Z] 34592.50 IOPS, 135.13 MiB/s 00:15:04.671 Latency(us) 00:15:04.671 [2024-12-16T22:11:11.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.671 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:04.671 xnvme_bdev : 5.00 34698.33 135.54 0.00 0.00 1840.54 964.14 5469.74 00:15:04.671 [2024-12-16T22:11:11.018Z] =================================================================================================================== 00:15:04.671 [2024-12-16T22:11:11.018Z] Total : 34698.33 135.54 0.00 0.00 1840.54 964.14 5469.74 00:15:04.671 22:11:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:04.671 22:11:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:04.671 22:11:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:04.671 22:11:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:04.671 22:11:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:04.933 { 00:15:04.933 "subsystems": [ 00:15:04.933 { 00:15:04.933 "subsystem": "bdev", 00:15:04.933 "config": [ 00:15:04.933 { 00:15:04.933 "params": { 00:15:04.933 "io_mechanism": "io_uring_cmd", 00:15:04.933 "conserve_cpu": true, 00:15:04.933 "filename": "/dev/ng0n1", 00:15:04.933 "name": "xnvme_bdev" 00:15:04.933 }, 00:15:04.933 "method": "bdev_xnvme_create" 00:15:04.933 }, 00:15:04.933 { 00:15:04.933 "method": "bdev_wait_for_examine" 00:15:04.933 } 00:15:04.933 ] 00:15:04.933 } 00:15:04.933 ] 00:15:04.933 } 00:15:04.933 [2024-12-16 22:11:11.082096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
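The killprocess step that closed the xnvme_rpc test above guards its teardown: kill -0 probes that the pid is still alive and ps -o comm= confirms what is about to be killed before the signal goes out. A minimal sketch of the pattern (wait is silenced because the target may not be a child of the calling shell):

    pid=84605
    if kill -0 "$pid" 2>/dev/null; then
        ps --no-headers -o comm= -p "$pid"   # prints reactor_0 in the trace above
        kill "$pid"
        wait "$pid" 2>/dev/null
    fi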
00:15:04.933 [2024-12-16 22:11:11.082234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84726 ] 00:15:04.933 [2024-12-16 22:11:11.246812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.933 [2024-12-16 22:11:11.276012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.193 Running I/O for 5 seconds... 00:15:07.080 40948.00 IOPS, 159.95 MiB/s [2024-12-16T22:11:14.815Z] 38857.50 IOPS, 151.79 MiB/s [2024-12-16T22:11:15.388Z] 37751.00 IOPS, 147.46 MiB/s [2024-12-16T22:11:16.775Z] 36999.00 IOPS, 144.53 MiB/s 00:15:10.428 Latency(us) 00:15:10.428 [2024-12-16T22:11:16.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.428 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:10.428 xnvme_bdev : 5.00 36696.14 143.34 0.00 0.00 1739.99 658.51 6024.27 00:15:10.428 [2024-12-16T22:11:16.775Z] =================================================================================================================== 00:15:10.428 [2024-12-16T22:11:16.775Z] Total : 36696.14 143.34 0.00 0.00 1739.99 658.51 6024.27 00:15:10.428 22:11:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:10.428 22:11:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:10.428 22:11:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:10.428 22:11:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:10.428 22:11:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:10.428 { 00:15:10.428 "subsystems": [ 00:15:10.428 { 00:15:10.428 "subsystem": "bdev", 00:15:10.428 "config": [ 00:15:10.428 { 00:15:10.428 "params": { 00:15:10.428 "io_mechanism": "io_uring_cmd", 00:15:10.428 "conserve_cpu": true, 00:15:10.428 "filename": "/dev/ng0n1", 00:15:10.428 "name": "xnvme_bdev" 00:15:10.428 }, 00:15:10.428 "method": "bdev_xnvme_create" 00:15:10.428 }, 00:15:10.428 { 00:15:10.428 "method": "bdev_wait_for_examine" 00:15:10.428 } 00:15:10.428 ] 00:15:10.428 } 00:15:10.428 ] 00:15:10.428 } 00:15:10.428 [2024-12-16 22:11:16.636474] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:10.428 [2024-12-16 22:11:16.636620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84789 ] 00:15:10.689 [2024-12-16 22:11:16.796671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.689 [2024-12-16 22:11:16.825247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.689 Running I/O for 5 seconds... 
00:15:13.022 80320.00 IOPS, 313.75 MiB/s [2024-12-16T22:11:19.941Z] 80512.00 IOPS, 314.50 MiB/s [2024-12-16T22:11:21.327Z] 80362.67 IOPS, 313.92 MiB/s [2024-12-16T22:11:22.269Z] 80416.00 IOPS, 314.12 MiB/s 00:15:15.922 Latency(us) 00:15:15.922 [2024-12-16T22:11:22.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.922 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:15.922 xnvme_bdev : 5.00 79957.04 312.33 0.00 0.00 797.02 389.12 7864.32 00:15:15.922 [2024-12-16T22:11:22.269Z] =================================================================================================================== 00:15:15.922 [2024-12-16T22:11:22.269Z] Total : 79957.04 312.33 0.00 0.00 797.02 389.12 7864.32 00:15:15.922 22:11:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.922 22:11:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:15.922 22:11:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.922 22:11:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.922 22:11:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.922 { 00:15:15.922 "subsystems": [ 00:15:15.922 { 00:15:15.922 "subsystem": "bdev", 00:15:15.922 "config": [ 00:15:15.922 { 00:15:15.922 "params": { 00:15:15.922 "io_mechanism": "io_uring_cmd", 00:15:15.922 "conserve_cpu": true, 00:15:15.922 "filename": "/dev/ng0n1", 00:15:15.922 "name": "xnvme_bdev" 00:15:15.922 }, 00:15:15.922 "method": "bdev_xnvme_create" 00:15:15.922 }, 00:15:15.922 { 00:15:15.922 "method": "bdev_wait_for_examine" 00:15:15.922 } 00:15:15.922 ] 00:15:15.922 } 00:15:15.922 ] 00:15:15.922 } 00:15:15.922 [2024-12-16 22:11:22.182994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:15.922 [2024-12-16 22:11:22.183138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84852 ] 00:15:16.183 [2024-12-16 22:11:22.345890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.183 [2024-12-16 22:11:22.374390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.183 Running I/O for 5 seconds... 
00:15:18.139 42777.00 IOPS, 167.10 MiB/s [2024-12-16T22:11:25.869Z] 41761.00 IOPS, 163.13 MiB/s [2024-12-16T22:11:26.808Z] 41312.33 IOPS, 161.38 MiB/s [2024-12-16T22:11:27.843Z] 40896.25 IOPS, 159.75 MiB/s [2024-12-16T22:11:27.843Z] 40238.00 IOPS, 157.18 MiB/s 00:15:21.496 Latency(us) 00:15:21.496 [2024-12-16T22:11:27.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.496 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:21.496 xnvme_bdev : 5.00 40215.48 157.09 0.00 0.00 1586.10 106.34 23088.84 00:15:21.496 [2024-12-16T22:11:27.843Z] =================================================================================================================== 00:15:21.496 [2024-12-16T22:11:27.843Z] Total : 40215.48 157.09 0.00 0.00 1586.10 106.34 23088.84 00:15:21.496 00:15:21.496 real 0m22.282s 00:15:21.496 user 0m14.616s 00:15:21.496 sys 0m5.590s 00:15:21.496 22:11:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.496 ************************************ 00:15:21.496 END TEST xnvme_bdevperf 00:15:21.496 ************************************ 00:15:21.496 22:11:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.496 22:11:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:21.496 22:11:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:21.496 22:11:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.496 22:11:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.496 ************************************ 00:15:21.496 START TEST xnvme_fio_plugin 00:15:21.496 ************************************ 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.496 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:21.757 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.757 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.757 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:21.757 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.757 22:11:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:21.757 { 00:15:21.757 "subsystems": [ 00:15:21.757 { 00:15:21.757 "subsystem": "bdev", 00:15:21.757 "config": [ 00:15:21.757 { 00:15:21.757 "params": { 00:15:21.757 "io_mechanism": "io_uring_cmd", 00:15:21.757 "conserve_cpu": true, 00:15:21.757 "filename": "/dev/ng0n1", 00:15:21.757 "name": "xnvme_bdev" 00:15:21.757 }, 00:15:21.757 "method": "bdev_xnvme_create" 00:15:21.757 }, 00:15:21.757 { 00:15:21.757 "method": "bdev_wait_for_examine" 00:15:21.757 } 00:15:21.757 ] 00:15:21.757 } 00:15:21.757 ] 00:15:21.757 } 00:15:21.757 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:21.757 fio-3.35 00:15:21.757 Starting 1 thread 00:15:28.348 00:15:28.348 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84966: Mon Dec 16 22:11:33 2024 00:15:28.348 read: IOPS=36.0k, BW=141MiB/s (147MB/s)(703MiB/5001msec) 00:15:28.348 slat (usec): min=2, max=108, avg= 3.87, stdev= 2.09 00:15:28.348 clat (usec): min=942, max=3842, avg=1620.36, stdev=246.68 00:15:28.348 lat (usec): min=945, max=3882, avg=1624.23, stdev=247.07 00:15:28.348 clat percentiles (usec): 00:15:28.348 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1418], 00:15:28.348 | 30.00th=[ 1483], 40.00th=[ 1532], 50.00th=[ 1582], 60.00th=[ 1647], 00:15:28.348 | 70.00th=[ 1713], 80.00th=[ 1811], 90.00th=[ 1958], 95.00th=[ 2073], 00:15:28.348 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2671], 99.95th=[ 2802], 00:15:28.348 | 99.99th=[ 3654] 00:15:28.348 bw ( KiB/s): min=139776, max=151040, per=99.95%, avg=143872.00, stdev=3453.63, samples=9 00:15:28.348 iops : min=34944, max=37760, avg=35968.00, stdev=863.41, samples=9 00:15:28.348 lat (usec) : 1000=0.03% 00:15:28.348 lat (msec) : 2=92.28%, 4=7.69% 00:15:28.348 cpu : usr=47.70%, sys=48.76%, ctx=10, majf=0, minf=771 00:15:28.348 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:28.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.348 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:28.348 issued rwts: total=179968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.348 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:28.348 00:15:28.348 Run status group 0 (all jobs): 00:15:28.348 READ: bw=141MiB/s (147MB/s), 141MiB/s-141MiB/s (147MB/s-147MB/s), io=703MiB (737MB), run=5001-5001msec 00:15:28.348 ----------------------------------------------------- 00:15:28.348 Suppressions used: 00:15:28.348 count bytes template 00:15:28.348 1 11 /usr/src/fio/parse.c 00:15:28.348 1 8 libtcmalloc_minimal.so 00:15:28.348 1 904 libcrypto.so 00:15:28.348 ----------------------------------------------------- 00:15:28.348 00:15:28.348 22:11:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.348 22:11:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.348 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.348 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:28.349 22:11:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:28.349 22:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:28.349 22:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:28.349 22:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:28.349 22:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:28.349 22:11:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:28.349 { 00:15:28.349 "subsystems": [ 00:15:28.349 { 00:15:28.349 "subsystem": "bdev", 00:15:28.349 "config": [ 00:15:28.349 { 00:15:28.349 "params": { 00:15:28.349 "io_mechanism": "io_uring_cmd", 00:15:28.349 "conserve_cpu": true, 00:15:28.349 "filename": "/dev/ng0n1", 00:15:28.349 "name": "xnvme_bdev" 00:15:28.349 }, 00:15:28.349 "method": "bdev_xnvme_create" 00:15:28.349 }, 00:15:28.349 { 00:15:28.349 "method": "bdev_wait_for_examine" 00:15:28.349 } 00:15:28.349 ] 00:15:28.349 } 00:15:28.349 ] 00:15:28.349 } 00:15:28.349 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:28.349 fio-3.35 00:15:28.349 Starting 1 thread 00:15:33.645 00:15:33.645 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=85047: Mon Dec 16 22:11:39 2024 00:15:33.645 write: IOPS=38.8k, BW=151MiB/s (159MB/s)(757MiB/5001msec); 0 zone resets 00:15:33.645 slat (usec): min=2, max=136, avg= 4.06, stdev= 2.20 00:15:33.645 clat (usec): min=794, max=5211, avg=1489.57, stdev=291.45 00:15:33.645 lat (usec): min=800, max=5217, avg=1493.63, stdev=292.04 00:15:33.645 clat percentiles (usec): 00:15:33.645 | 1.00th=[ 1020], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1237], 00:15:33.645 | 30.00th=[ 1319], 40.00th=[ 1385], 50.00th=[ 1467], 60.00th=[ 1532], 00:15:33.645 | 70.00th=[ 1614], 80.00th=[ 1713], 90.00th=[ 1860], 95.00th=[ 1991], 00:15:33.645 | 99.00th=[ 2343], 99.50th=[ 2540], 99.90th=[ 3097], 99.95th=[ 3687], 00:15:33.645 | 99.99th=[ 4883] 00:15:33.645 bw ( KiB/s): min=143248, max=183624, per=100.00%, avg=156140.22, stdev=13636.87, samples=9 00:15:33.645 iops : min=35812, max=45906, avg=39034.89, stdev=3409.22, samples=9 00:15:33.645 lat (usec) : 1000=0.52% 00:15:33.645 lat (msec) : 2=94.81%, 4=4.64%, 10=0.04% 00:15:33.645 cpu : usr=57.24%, sys=38.16%, ctx=13, majf=0, minf=772 00:15:33.645 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:33.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.645 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:33.645 issued rwts: total=0,193832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:33.645 00:15:33.645 Run status group 0 (all jobs): 00:15:33.645 WRITE: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=757MiB (794MB), run=5001-5001msec 00:15:33.645 ----------------------------------------------------- 00:15:33.645 Suppressions used: 00:15:33.645 count bytes template 00:15:33.645 1 11 /usr/src/fio/parse.c 00:15:33.645 1 8 libtcmalloc_minimal.so 00:15:33.645 1 904 libcrypto.so 00:15:33.645 ----------------------------------------------------- 00:15:33.645 00:15:33.908 00:15:33.908 real 0m12.188s 00:15:33.908 user 0m6.477s 00:15:33.908 sys 0m4.993s 00:15:33.908 ************************************ 00:15:33.908 END TEST xnvme_fio_plugin 00:15:33.908 ************************************ 00:15:33.908 22:11:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.908 22:11:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:33.908 22:11:40 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84605 00:15:33.908 22:11:40 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84605 ']' 00:15:33.908 22:11:40 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84605 00:15:33.908 
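In shape, the two fio plugin runs above reduce to this: the harness generates the JSON bdev config seen in the trace, streams it to fio over /dev/fd/62, and loads the SPDK ioengine by LD_PRELOAD-ing build/fio/spdk_bdev (plus libasan when sanitizers are enabled). A minimal standalone sketch of the same invocation — the temp-file path stands in for the /dev/fd/62 process substitution, and the JSON is copied from the config the test emits:

    cat > /tmp/xnvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "io_uring_cmd",
                "conserve_cpu": true,
                "filename": "/dev/ng0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # fio resolves --ioengine=spdk_bdev from the preloaded plugin;
    # --filename names the bdev created above, not a block-device path.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev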
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84605) - No such process 00:15:33.908 Process with pid 84605 is not found 00:15:33.908 22:11:40 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84605 is not found' 00:15:33.908 00:15:33.908 real 2m58.198s 00:15:33.908 user 1m28.882s 00:15:33.908 sys 1m15.406s 00:15:33.908 22:11:40 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.908 ************************************ 00:15:33.908 END TEST nvme_xnvme 00:15:33.908 22:11:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.908 ************************************ 00:15:33.908 22:11:40 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:33.908 22:11:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:33.908 22:11:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.908 22:11:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.908 ************************************ 00:15:33.908 START TEST blockdev_xnvme 00:15:33.908 ************************************ 00:15:33.908 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:33.908 * Looking for test storage... 00:15:33.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:33.908 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:33.908 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:33.908 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:34.170 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.171 22:11:40 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:34.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.171 --rc genhtml_branch_coverage=1 00:15:34.171 --rc genhtml_function_coverage=1 00:15:34.171 --rc genhtml_legend=1 00:15:34.171 --rc geninfo_all_blocks=1 00:15:34.171 --rc geninfo_unexecuted_blocks=1 00:15:34.171 00:15:34.171 ' 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:34.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.171 --rc genhtml_branch_coverage=1 00:15:34.171 --rc genhtml_function_coverage=1 00:15:34.171 --rc genhtml_legend=1 00:15:34.171 --rc geninfo_all_blocks=1 00:15:34.171 --rc geninfo_unexecuted_blocks=1 00:15:34.171 00:15:34.171 ' 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:34.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.171 --rc genhtml_branch_coverage=1 00:15:34.171 --rc genhtml_function_coverage=1 00:15:34.171 --rc genhtml_legend=1 00:15:34.171 --rc geninfo_all_blocks=1 00:15:34.171 --rc geninfo_unexecuted_blocks=1 00:15:34.171 00:15:34.171 ' 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:34.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.171 --rc genhtml_branch_coverage=1 00:15:34.171 --rc genhtml_function_coverage=1 00:15:34.171 --rc genhtml_legend=1 00:15:34.171 --rc geninfo_all_blocks=1 00:15:34.171 --rc geninfo_unexecuted_blocks=1 00:15:34.171 00:15:34.171 ' 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85175 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85175 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 85175 ']' 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.171 22:11:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.171 22:11:40 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:34.171 [2024-12-16 22:11:40.384910] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
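The scripts/common.sh xtrace a few records up is the harness deciding which lcov flags to use: it splits dotted versions on ".", "-", and ":" and compares them field by field. Condensed to its core, the idiom looks like this — a simplified sketch, not the verbatim upstream helper:

    # lt A B -> exit 0 when version A sorts strictly before version B
    lt() {
      local -a ver1 ver2
      local v d1 d2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}; [[ $d1 =~ ^[0-9]+$ ]] || d1=0
        d2=${ver2[v]:-0}; [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        (( d1 > d2 )) && return 1
        (( d1 < d2 )) && return 0
      done
      return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # the comparison traced above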
00:15:34.171 [2024-12-16 22:11:40.385083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85175 ] 00:15:34.432 [2024-12-16 22:11:40.552367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.432 [2024-12-16 22:11:40.581826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.004 22:11:41 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.004 22:11:41 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:35.004 22:11:41 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:35.004 22:11:41 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:35.004 22:11:41 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:35.004 22:11:41 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:35.004 22:11:41 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:35.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:36.150 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:36.150 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:36.150 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:36.150 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:36.150 22:11:42 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:36.150 nvme0n1 00:15:36.150 nvme0n2 00:15:36.150 nvme0n3 00:15:36.150 nvme1n1 00:15:36.150 nvme2n1 00:15:36.150 nvme3n1 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.150 
22:11:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.150 22:11:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:36.150 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:36.151 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9b861868-8152-409e-977e-d1a05ea36ff0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9b861868-8152-409e-977e-d1a05ea36ff0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "353657a9-cad9-4690-8260-521b395bb86c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "353657a9-cad9-4690-8260-521b395bb86c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "78dcc3ec-8eb6-4ec8-aea4-382a9bfe54d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78dcc3ec-8eb6-4ec8-aea4-382a9bfe54d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "62c60f35-d94c-4731-ba10-0194d32fa092"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "62c60f35-d94c-4731-ba10-0194d32fa092",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7dc50ecd-a980-46a9-86a0-b369cdf85625"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7dc50ecd-a980-46a9-86a0-b369cdf85625",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "50995d27-2d71-442e-bbae-17ca00365da1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "50995d27-2d71-442e-bbae-17ca00365da1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:36.412 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:36.412 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:36.412 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:36.412 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 85175 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 85175 ']' 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 85175 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 85175 00:15:36.412 killing process with pid 85175 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85175' 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 85175 00:15:36.412 22:11:42 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 85175 00:15:36.672 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:36.672 22:11:42 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:36.672 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:36.672 22:11:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.672 22:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.672 ************************************ 00:15:36.672 START TEST bdev_hello_world 00:15:36.672 ************************************ 00:15:36.672 22:11:42 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:36.672 [2024-12-16 22:11:42.905118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:36.672 [2024-12-16 22:11:42.905493] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85443 ] 00:15:36.932 [2024-12-16 22:11:43.067268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.932 [2024-12-16 22:11:43.096368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.193 [2024-12-16 22:11:43.316282] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:37.193 [2024-12-16 22:11:43.316353] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:37.193 [2024-12-16 22:11:43.316376] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:37.193 [2024-12-16 22:11:43.318734] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:37.193 [2024-12-16 22:11:43.319484] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:37.193 [2024-12-16 22:11:43.319527] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:37.193 [2024-12-16 22:11:43.320154] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
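Both killprocess calls in this log — pid 84605 above, which had already exited, and pid 85175, the spdk_tgt — walk the same branches: probe the pid with kill -0, read the command name with ps so a sudo wrapper is never signalled directly, then kill and reap. A condensed sketch of that visible behavior, assuming the autotest_common.sh helper's structure rather than quoting it:

    killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      if ! kill -0 "$pid" 2>/dev/null; then            # pid 84605 took this branch
        echo "Process with pid $pid is not found"
        return 0
      fi
      if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      fi
      if [[ $process_name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                            # reap; tolerate non-zero exit
      fi
    }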
00:15:37.193 00:15:37.193 [2024-12-16 22:11:43.320195] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:37.193 00:15:37.193 real 0m0.667s 00:15:37.193 user 0m0.329s 00:15:37.193 sys 0m0.193s 00:15:37.193 22:11:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.193 ************************************ 00:15:37.193 END TEST bdev_hello_world 00:15:37.193 ************************************ 00:15:37.193 22:11:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:37.454 22:11:43 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:37.454 22:11:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.454 22:11:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.454 22:11:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.454 ************************************ 00:15:37.454 START TEST bdev_bounds 00:15:37.454 ************************************ 00:15:37.454 Process bdevio pid: 85468 00:15:37.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85468 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85468' 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85468 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85468 ']' 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.454 22:11:43 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:37.454 [2024-12-16 22:11:43.645904] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:37.454 [2024-12-16 22:11:43.646052] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85468 ] 00:15:37.715 [2024-12-16 22:11:43.808287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:37.715 [2024-12-16 22:11:43.840246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.715 [2024-12-16 22:11:43.840576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.715 [2024-12-16 22:11:43.840582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.287 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.287 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:38.287 22:11:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:38.287 I/O targets: 00:15:38.287 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:38.287 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:38.287 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:38.287 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:38.287 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:38.287 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:38.287 00:15:38.287 00:15:38.287 CUnit - A unit testing framework for C - Version 2.1-3 00:15:38.287 http://cunit.sourceforge.net/ 00:15:38.287 00:15:38.287 00:15:38.287 Suite: bdevio tests on: nvme3n1 00:15:38.287 Test: blockdev write read block ...passed 00:15:38.287 Test: blockdev write zeroes read block ...passed 00:15:38.287 Test: blockdev write zeroes read no split ...passed 00:15:38.287 Test: blockdev write zeroes read split ...passed 00:15:38.549 Test: blockdev write zeroes read split partial ...passed 00:15:38.549 Test: blockdev reset ...passed 00:15:38.549 Test: blockdev write read 8 blocks ...passed 00:15:38.549 Test: blockdev write read size > 128k ...passed 00:15:38.549 Test: blockdev write read invalid size ...passed 00:15:38.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:38.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:38.549 Test: blockdev write read max offset ...passed 00:15:38.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:38.549 Test: blockdev writev readv 8 blocks ...passed 00:15:38.549 Test: blockdev writev readv 30 x 1block ...passed 00:15:38.549 Test: blockdev writev readv block ...passed 00:15:38.549 Test: blockdev writev readv size > 128k ...passed 00:15:38.549 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:38.549 Test: blockdev comparev and writev ...passed 00:15:38.549 Test: blockdev nvme passthru rw ...passed 00:15:38.549 Test: blockdev nvme passthru vendor specific ...passed 00:15:38.549 Test: blockdev nvme admin passthru ...passed 00:15:38.549 Test: blockdev copy ...passed 00:15:38.549 Suite: bdevio tests on: nvme2n1 00:15:38.549 Test: blockdev write read block ...passed 00:15:38.549 Test: blockdev write zeroes read block ...passed 00:15:38.549 Test: blockdev write zeroes read no split ...passed 00:15:38.549 Test: blockdev write zeroes read split ...passed 00:15:38.549 Test: blockdev write zeroes read split partial ...passed 00:15:38.549 Test: blockdev reset ...passed 
00:15:38.549 Test: blockdev write read 8 blocks ...passed 00:15:38.549 Test: blockdev write read size > 128k ...passed 00:15:38.549 Test: blockdev write read invalid size ...passed 00:15:38.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:38.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:38.550 Test: blockdev write read max offset ...passed 00:15:38.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:38.550 Test: blockdev writev readv 8 blocks ...passed 00:15:38.550 Test: blockdev writev readv 30 x 1block ...passed 00:15:38.550 Test: blockdev writev readv block ...passed 00:15:38.550 Test: blockdev writev readv size > 128k ...passed 00:15:38.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:38.550 Test: blockdev comparev and writev ...passed 00:15:38.550 Test: blockdev nvme passthru rw ...passed 00:15:38.550 Test: blockdev nvme passthru vendor specific ...passed 00:15:38.550 Test: blockdev nvme admin passthru ...passed 00:15:38.550 Test: blockdev copy ...passed 00:15:38.550 Suite: bdevio tests on: nvme1n1 00:15:38.550 Test: blockdev write read block ...passed 00:15:38.550 Test: blockdev write zeroes read block ...passed 00:15:38.550 Test: blockdev write zeroes read no split ...passed 00:15:38.550 Test: blockdev write zeroes read split ...passed 00:15:38.550 Test: blockdev write zeroes read split partial ...passed 00:15:38.550 Test: blockdev reset ...passed 00:15:38.550 Test: blockdev write read 8 blocks ...passed 00:15:38.550 Test: blockdev write read size > 128k ...passed 00:15:38.550 Test: blockdev write read invalid size ...passed 00:15:38.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:38.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:38.550 Test: blockdev write read max offset ...passed 00:15:38.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:38.550 Test: blockdev writev readv 8 blocks ...passed 00:15:38.550 Test: blockdev writev readv 30 x 1block ...passed 00:15:38.550 Test: blockdev writev readv block ...passed 00:15:38.550 Test: blockdev writev readv size > 128k ...passed 00:15:38.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:38.550 Test: blockdev comparev and writev ...passed 00:15:38.550 Test: blockdev nvme passthru rw ...passed 00:15:38.550 Test: blockdev nvme passthru vendor specific ...passed 00:15:38.550 Test: blockdev nvme admin passthru ...passed 00:15:38.550 Test: blockdev copy ...passed 00:15:38.550 Suite: bdevio tests on: nvme0n3 00:15:38.550 Test: blockdev write read block ...passed 00:15:38.550 Test: blockdev write zeroes read block ...passed 00:15:38.550 Test: blockdev write zeroes read no split ...passed 00:15:38.550 Test: blockdev write zeroes read split ...passed 00:15:38.550 Test: blockdev write zeroes read split partial ...passed 00:15:38.550 Test: blockdev reset ...passed 00:15:38.550 Test: blockdev write read 8 blocks ...passed 00:15:38.550 Test: blockdev write read size > 128k ...passed 00:15:38.550 Test: blockdev write read invalid size ...passed 00:15:38.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:38.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:38.550 Test: blockdev write read max offset ...passed 00:15:38.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:38.550 Test: blockdev writev readv 8 blocks 
...passed 00:15:38.550 Test: blockdev writev readv 30 x 1block ...passed 00:15:38.550 Test: blockdev writev readv block ...passed 00:15:38.550 Test: blockdev writev readv size > 128k ...passed 00:15:38.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:38.550 Test: blockdev comparev and writev ...passed 00:15:38.550 Test: blockdev nvme passthru rw ...passed 00:15:38.550 Test: blockdev nvme passthru vendor specific ...passed 00:15:38.550 Test: blockdev nvme admin passthru ...passed 00:15:38.550 Test: blockdev copy ...passed 00:15:38.550 Suite: bdevio tests on: nvme0n2 00:15:38.550 Test: blockdev write read block ...passed 00:15:38.550 Test: blockdev write zeroes read block ...passed 00:15:38.550 Test: blockdev write zeroes read no split ...passed 00:15:38.550 Test: blockdev write zeroes read split ...passed 00:15:38.550 Test: blockdev write zeroes read split partial ...passed 00:15:38.550 Test: blockdev reset ...passed 00:15:38.550 Test: blockdev write read 8 blocks ...passed 00:15:38.550 Test: blockdev write read size > 128k ...passed 00:15:38.550 Test: blockdev write read invalid size ...passed 00:15:38.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:38.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:38.550 Test: blockdev write read max offset ...passed 00:15:38.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:38.550 Test: blockdev writev readv 8 blocks ...passed 00:15:38.550 Test: blockdev writev readv 30 x 1block ...passed 00:15:38.550 Test: blockdev writev readv block ...passed 00:15:38.550 Test: blockdev writev readv size > 128k ...passed 00:15:38.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:38.550 Test: blockdev comparev and writev ...passed 00:15:38.550 Test: blockdev nvme passthru rw ...passed 00:15:38.550 Test: blockdev nvme passthru vendor specific ...passed 00:15:38.550 Test: blockdev nvme admin passthru ...passed 00:15:38.550 Test: blockdev copy ...passed 00:15:38.550 Suite: bdevio tests on: nvme0n1 00:15:38.550 Test: blockdev write read block ...passed 00:15:38.550 Test: blockdev write zeroes read block ...passed 00:15:38.550 Test: blockdev write zeroes read no split ...passed 00:15:38.550 Test: blockdev write zeroes read split ...passed 00:15:38.550 Test: blockdev write zeroes read split partial ...passed 00:15:38.550 Test: blockdev reset ...passed 00:15:38.550 Test: blockdev write read 8 blocks ...passed 00:15:38.550 Test: blockdev write read size > 128k ...passed 00:15:38.550 Test: blockdev write read invalid size ...passed 00:15:38.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:38.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:38.550 Test: blockdev write read max offset ...passed 00:15:38.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:38.550 Test: blockdev writev readv 8 blocks ...passed 00:15:38.550 Test: blockdev writev readv 30 x 1block ...passed 00:15:38.550 Test: blockdev writev readv block ...passed 00:15:38.550 Test: blockdev writev readv size > 128k ...passed 00:15:38.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:38.812 Test: blockdev comparev and writev ...passed 00:15:38.812 Test: blockdev nvme passthru rw ...passed 00:15:38.812 Test: blockdev nvme passthru vendor specific ...passed 00:15:38.812 Test: blockdev nvme admin passthru ...passed 00:15:38.812 Test: blockdev copy ...passed 
00:15:38.812 00:15:38.812 Run Summary: Type Total Ran Passed Failed Inactive 00:15:38.812 suites 6 6 n/a 0 0 00:15:38.812 tests 138 138 138 0 0 00:15:38.812 asserts 780 780 780 0 n/a 00:15:38.812 00:15:38.812 Elapsed time = 0.635 seconds 00:15:38.812 0 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 85468 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85468 ']' 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 85468 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85468 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85468' 00:15:38.812 killing process with pid 85468 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85468 00:15:38.812 22:11:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85468 00:15:39.072 22:11:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:39.072 00:15:39.072 real 0m1.588s 00:15:39.072 user 0m3.876s 00:15:39.072 sys 0m0.351s 00:15:39.072 22:11:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.072 22:11:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:39.072 ************************************ 00:15:39.072 END TEST bdev_bounds 00:15:39.072 ************************************ 00:15:39.072 22:11:45 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:39.072 22:11:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:39.072 22:11:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.072 22:11:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.072 ************************************ 00:15:39.072 START TEST bdev_nbd 00:15:39.072 ************************************ 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85517 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85517 /var/tmp/spdk-nbd.sock 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85517 ']' 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:39.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.072 22:11:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:39.072 [2024-12-16 22:11:45.318816] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
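The nbd test drives a dedicated bdev_svc app over its own RPC UNIX socket (/var/tmp/spdk-nbd.sock) and blocks until the app answers RPCs before mapping any devices. A condensed sketch of that launch-and-wait flow, with paths taken from the log; the polling loop is a simplification of the real waitforlisten helper, and the retry count is illustrative:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock

    # Start the bdev service with the test's bdev config on a private socket.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 \
        --json "$SPDK/test/bdev/bdev.json" &
    app_pid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $SOCK..."
    for ((i = 1; i <= 100; i++)); do
        # rpc_get_methods succeeds only once the app is serving RPCs.
        "$SPDK/scripts/rpc.py" -t 1 -s "$SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

    # Devices can then be mapped one bdev at a time:
    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk nvme0n1 /dev/nbd0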
00:15:39.072 [2024-12-16 22:11:45.318960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.333 [2024-12-16 22:11:45.480702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.333 [2024-12-16 22:11:45.509199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:39.905 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.166 
1+0 records in 00:15:40.166 1+0 records out 00:15:40.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116579 s, 3.5 MB/s 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:40.166 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.428 1+0 records in 00:15:40.428 1+0 records out 00:15:40.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125092 s, 3.3 MB/s 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:40.428 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:40.689 22:11:46 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.689 1+0 records in 00:15:40.689 1+0 records out 00:15:40.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00142174 s, 2.9 MB/s 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:40.689 22:11:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.951 1+0 records in 00:15:40.951 1+0 records out 00:15:40.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130734 s, 3.1 MB/s 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:40.951 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.214 1+0 records in 00:15:41.214 1+0 records out 00:15:41.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129224 s, 3.2 MB/s 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:41.214 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:41.555 22:11:47 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.555 1+0 records in 00:15:41.555 1+0 records out 00:15:41.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110462 s, 3.7 MB/s 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:41.555 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:41.816 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:41.816 { 00:15:41.816 "nbd_device": "/dev/nbd0", 00:15:41.816 "bdev_name": "nvme0n1" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd1", 00:15:41.817 "bdev_name": "nvme0n2" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd2", 00:15:41.817 "bdev_name": "nvme0n3" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd3", 00:15:41.817 "bdev_name": "nvme1n1" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd4", 00:15:41.817 "bdev_name": "nvme2n1" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd5", 00:15:41.817 "bdev_name": "nvme3n1" 00:15:41.817 } 00:15:41.817 ]' 00:15:41.817 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:41.817 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:41.817 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd0", 00:15:41.817 "bdev_name": "nvme0n1" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd1", 00:15:41.817 "bdev_name": "nvme0n2" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd2", 00:15:41.817 "bdev_name": "nvme0n3" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd3", 00:15:41.817 "bdev_name": "nvme1n1" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": "/dev/nbd4", 00:15:41.817 "bdev_name": "nvme2n1" 00:15:41.817 }, 00:15:41.817 { 00:15:41.817 "nbd_device": 
"/dev/nbd5", 00:15:41.817 "bdev_name": "nvme3n1" 00:15:41.817 } 00:15:41.817 ]' 00:15:41.817 22:11:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:41.817 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.817 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:41.817 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.817 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:41.817 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.817 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.078 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.339 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.600 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:42.601 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.601 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.601 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.601 22:11:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.862 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.123 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:43.383 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:43.643 /dev/nbd0 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.643 1+0 records in 00:15:43.643 1+0 records out 00:15:43.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768354 s, 5.3 MB/s 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:43.643 22:11:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:43.904 /dev/nbd1 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.904 1+0 records in 00:15:43.904 1+0 records out 00:15:43.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855583 s, 4.8 MB/s 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.904 22:11:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:43.904 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:44.165 /dev/nbd10 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.165 1+0 records in 00:15:44.165 1+0 records out 00:15:44.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887611 s, 4.6 MB/s 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:44.165 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:44.426 /dev/nbd11 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.426 22:11:50 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.426 1+0 records in 00:15:44.426 1+0 records out 00:15:44.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105787 s, 3.9 MB/s 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:44.426 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:44.688 /dev/nbd12 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.688 1+0 records in 00:15:44.688 1+0 records out 00:15:44.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139429 s, 2.9 MB/s 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:44.688 22:11:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:44.688 /dev/nbd13 00:15:44.948 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:44.948 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:44.948 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:44.948 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:44.948 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.948 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.949 1+0 records in 00:15:44.949 1+0 records out 00:15:44.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124922 s, 3.3 MB/s 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.949 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd0", 00:15:45.210 "bdev_name": "nvme0n1" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd1", 00:15:45.210 "bdev_name": "nvme0n2" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd10", 00:15:45.210 "bdev_name": "nvme0n3" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd11", 00:15:45.210 "bdev_name": "nvme1n1" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd12", 00:15:45.210 "bdev_name": "nvme2n1" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd13", 00:15:45.210 "bdev_name": "nvme3n1" 00:15:45.210 } 00:15:45.210 ]' 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd0", 00:15:45.210 "bdev_name": "nvme0n1" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd1", 00:15:45.210 "bdev_name": "nvme0n2" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd10", 00:15:45.210 "bdev_name": "nvme0n3" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd11", 00:15:45.210 "bdev_name": "nvme1n1" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd12", 00:15:45.210 "bdev_name": "nvme2n1" 00:15:45.210 }, 00:15:45.210 { 00:15:45.210 "nbd_device": "/dev/nbd13", 00:15:45.210 "bdev_name": "nvme3n1" 00:15:45.210 } 00:15:45.210 ]' 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:45.210 /dev/nbd1 00:15:45.210 /dev/nbd10 00:15:45.210 /dev/nbd11 00:15:45.210 /dev/nbd12 00:15:45.210 /dev/nbd13' 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:45.210 /dev/nbd1 00:15:45.210 /dev/nbd10 00:15:45.210 /dev/nbd11 00:15:45.210 /dev/nbd12 00:15:45.210 /dev/nbd13' 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:45.210 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:45.211 256+0 records in 00:15:45.211 256+0 records out 00:15:45.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683599 s, 153 MB/s 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.211 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:45.211 256+0 records in 00:15:45.211 256+0 records out 00:15:45.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202345 s, 5.2 MB/s 00:15:45.471 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.471 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:45.471 256+0 records in 00:15:45.471 256+0 records out 00:15:45.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189121 s, 
5.5 MB/s 00:15:45.471 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.471 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:45.731 256+0 records in 00:15:45.731 256+0 records out 00:15:45.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234686 s, 4.5 MB/s 00:15:45.731 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.731 22:11:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:45.992 256+0 records in 00:15:45.992 256+0 records out 00:15:45.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224622 s, 4.7 MB/s 00:15:45.992 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.992 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:46.253 256+0 records in 00:15:46.253 256+0 records out 00:15:46.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.256906 s, 4.1 MB/s 00:15:46.253 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:46.253 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:46.514 256+0 records in 00:15:46.514 256+0 records out 00:15:46.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141545 s, 7.4 MB/s 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:46.514 
22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.514 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.775 22:11:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.037 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.299 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.559 22:11:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.821 
22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.821 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:48.081 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:48.342 malloc_lvol_verify 00:15:48.342 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:48.342 41a58ef5-8973-4f2a-a820-6c991471096d 00:15:48.603 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:48.603 3ef28b30-dd5d-4563-8fc2-6937bd1bc065 00:15:48.603 22:11:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:48.864 /dev/nbd0 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:48.864 mke2fs 1.47.0 (5-Feb-2023) 00:15:48.864 Discarding device blocks: 0/4096 
done 00:15:48.864 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:48.864 00:15:48.864 Allocating group tables: 0/1 done 00:15:48.864 Writing inode tables: 0/1 done 00:15:48.864 Creating journal (1024 blocks): done 00:15:48.864 Writing superblocks and filesystem accounting information: 0/1 done 00:15:48.864 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.864 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85517 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85517 ']' 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85517 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85517 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.124 killing process with pid 85517 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85517' 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85517 00:15:49.124 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85517 00:15:49.386 22:11:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:49.386 00:15:49.386 real 0m10.251s 00:15:49.386 user 0m13.999s 00:15:49.386 sys 0m3.740s 00:15:49.386 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.386 22:11:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:49.386 ************************************ 00:15:49.386 END TEST bdev_nbd 00:15:49.386 ************************************ 
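The teardown traced above repeats one pattern per device: nbd_stop_disk is issued over the RPC socket, then waitfornbd_exit polls /proc/partitions until the nbd name disappears. A minimal bash sketch of that helper, reconstructed from the xtrace lines (nbd_common.sh@35-45); the retry delay is an assumption, since this log only shows the immediate-exit path where grep already fails:

#!/usr/bin/env bash
# Reconstruction of nbd_common.sh's waitfornbd_exit as traced above.
# Assumption: a short sleep between retries; only the first-pass exit
# (grep fails, loop breaks) is visible in this log.
waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do            # bounded poll, max 20 attempts
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1                          # assumed back-off; not shown in trace
        else
            break                              # device gone from the partition table
        fi
    done
    return 0
}

# Usage mirrors the trace: stop the disk over the RPC socket, then wait.
#   rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
#   waitfornbd_exit nbd0

Note the helper returns 0 unconditionally, so a device that never detaches within 20 polls is not an error here; the suite instead cross-checks afterwards with nbd_get_count, which pipes nbd_get_disks through jq and asserts the count is 0, as seen in the trace above.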
00:15:49.386 22:11:55 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:49.386 22:11:55 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:49.386 22:11:55 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:49.386 22:11:55 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:49.386 22:11:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.386 22:11:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.386 22:11:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.386 ************************************ 00:15:49.386 START TEST bdev_fio 00:15:49.386 ************************************ 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:49.386 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:49.386 ************************************ 00:15:49.386 START TEST bdev_fio_rw_verify 00:15:49.386 ************************************ 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.386 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:49.387 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.387 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:49.387 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:49.387 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:49.387 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:49.387 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:49.387 22:11:55 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:49.648 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.648 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.648 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.648 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.648 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.648 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.648 fio-3.35 00:15:49.648 Starting 6 threads 00:16:01.888 00:16:01.888 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85920: Mon Dec 16 22:12:06 2024 00:16:01.888 read: IOPS=17.0k, BW=66.5MiB/s (69.7MB/s)(665MiB/10001msec) 00:16:01.888 slat (usec): min=2, max=2612, avg= 6.50, stdev=19.88 00:16:01.888 clat (usec): min=71, max=1080.0k, avg=1154.05, stdev=7432.21 00:16:01.889 lat (usec): min=77, max=1080.0k, avg=1160.55, 
stdev=7432.35 00:16:01.889 clat percentiles (usec): 00:16:01.889 | 50.000th=[ 979], 99.000th=[ 3392], 99.900th=[ 4621], 00:16:01.889 | 99.990th=[ 6915], 99.999th=[1082131] 00:16:01.889 write: IOPS=17.3k, BW=67.6MiB/s (70.9MB/s)(677MiB/10001msec); 0 zone resets 00:16:01.889 slat (usec): min=10, max=5221, avg=37.53, stdev=121.77 00:16:01.889 clat (usec): min=76, max=9519, avg=1331.43, stdev=815.74 00:16:01.889 lat (usec): min=90, max=9587, avg=1368.96, stdev=827.68 00:16:01.889 clat percentiles (usec): 00:16:01.889 | 50.000th=[ 1188], 99.000th=[ 3916], 99.900th=[ 5211], 99.990th=[ 7308], 00:16:01.889 | 99.999th=[ 8586] 00:16:01.889 bw ( KiB/s): min=49109, max=120720, per=100.00%, avg=71283.49, stdev=3612.59, samples=112 00:16:01.889 iops : min=12275, max=30180, avg=17820.05, stdev=903.18, samples=112 00:16:01.889 lat (usec) : 100=0.02%, 250=5.78%, 500=13.05%, 750=13.85%, 1000=12.87% 00:16:01.889 lat (msec) : 2=39.53%, 4=14.30%, 10=0.60%, 2000=0.01% 00:16:01.889 cpu : usr=43.62%, sys=32.57%, ctx=5113, majf=0, minf=16491 00:16:01.889 IO depths : 1=11.4%, 2=23.9%, 4=51.1%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.889 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.889 issued rwts: total=170274,173202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.889 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:01.889 00:16:01.889 Run status group 0 (all jobs): 00:16:01.889 READ: bw=66.5MiB/s (69.7MB/s), 66.5MiB/s-66.5MiB/s (69.7MB/s-69.7MB/s), io=665MiB (697MB), run=10001-10001msec 00:16:01.889 WRITE: bw=67.6MiB/s (70.9MB/s), 67.6MiB/s-67.6MiB/s (70.9MB/s-70.9MB/s), io=677MiB (709MB), run=10001-10001msec 00:16:01.889 ----------------------------------------------------- 00:16:01.889 Suppressions used: 00:16:01.889 count bytes template 00:16:01.889 6 48 /usr/src/fio/parse.c 00:16:01.889 2817 270432 /usr/src/fio/iolog.c 00:16:01.889 1 8 libtcmalloc_minimal.so 00:16:01.889 1 904 libcrypto.so 00:16:01.889 ----------------------------------------------------- 00:16:01.889 00:16:01.889 00:16:01.889 real 0m11.186s 00:16:01.889 user 0m26.883s 00:16:01.889 sys 0m19.877s 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.889 ************************************ 00:16:01.889 END TEST bdev_fio_rw_verify 00:16:01.889 ************************************ 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:01.889 22:12:06 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9b861868-8152-409e-977e-d1a05ea36ff0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9b861868-8152-409e-977e-d1a05ea36ff0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "353657a9-cad9-4690-8260-521b395bb86c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "353657a9-cad9-4690-8260-521b395bb86c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "78dcc3ec-8eb6-4ec8-aea4-382a9bfe54d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78dcc3ec-8eb6-4ec8-aea4-382a9bfe54d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' 
' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "62c60f35-d94c-4731-ba10-0194d32fa092"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "62c60f35-d94c-4731-ba10-0194d32fa092",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7dc50ecd-a980-46a9-86a0-b369cdf85625"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7dc50ecd-a980-46a9-86a0-b369cdf85625",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "50995d27-2d71-442e-bbae-17ca00365da1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "50995d27-2d71-442e-bbae-17ca00365da1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.889 /home/vagrant/spdk_repo/spdk 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:01.889 00:16:01.889 real 0m11.356s 00:16:01.889 
user 0m26.951s 00:16:01.889 sys 0m19.953s 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.889 ************************************ 00:16:01.889 END TEST bdev_fio 00:16:01.889 ************************************ 00:16:01.889 22:12:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:01.889 22:12:06 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:01.889 22:12:06 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:01.889 22:12:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:01.889 22:12:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.889 22:12:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.889 ************************************ 00:16:01.889 START TEST bdev_verify 00:16:01.889 ************************************ 00:16:01.889 22:12:06 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:01.890 [2024-12-16 22:12:07.050625] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:01.890 [2024-12-16 22:12:07.050761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86087 ] 00:16:01.890 [2024-12-16 22:12:07.212827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:01.890 [2024-12-16 22:12:07.243201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.890 [2024-12-16 22:12:07.243303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.890 Running I/O for 5 seconds... 
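The verify pass just launched exercises every xnvme bdev through SPDK's bdevperf example app rather than the kernel nbd path. All flags are visible in the run_test line above; a standalone reproduction would look roughly like the sketch below (paths assume the same vagrant workspace layout; the -C behavior in the comment is inferred from the duplicated per-core jobs in the results that follow, not stated in this log):

# Sketch of the bdev_verify invocation traced above, run by hand.
# -q 128    : 128 outstanding I/Os per job
# -o 4096   : 4096-byte I/O size
# -w verify : write, read back, and compare
# -t 5      : run for 5 seconds
# -C        : every core submits to each bdev (inferred: two jobs per bdev below)
# -m 0x3    : core mask covering the two reactors (cores 0 and 1)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With -m 0x3 and -C together, each of the six bdevs is driven by both reactors, which is why the latency table that follows reports every device twice, once per core mask (0x1 and 0x2).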
00:16:03.406 24544.00 IOPS, 95.88 MiB/s [2024-12-16T22:12:11.139Z] 23008.00 IOPS, 89.88 MiB/s [2024-12-16T22:12:12.082Z] 23306.67 IOPS, 91.04 MiB/s [2024-12-16T22:12:12.655Z] 23440.00 IOPS, 91.56 MiB/s 00:16:06.308 Latency(us) 00:16:06.308 [2024-12-16T22:12:12.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.308 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.308 Verification LBA range: start 0x0 length 0x80000 00:16:06.308 nvme0n1 : 5.04 1726.54 6.74 0.00 0.00 73987.10 7813.91 67350.84 00:16:06.308 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.308 Verification LBA range: start 0x80000 length 0x80000 00:16:06.308 nvme0n1 : 5.02 1988.45 7.77 0.00 0.00 64261.02 7914.73 67350.84 00:16:06.308 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.308 Verification LBA range: start 0x0 length 0x80000 00:16:06.308 nvme0n2 : 5.03 1729.01 6.75 0.00 0.00 73699.97 9175.04 71787.13 00:16:06.308 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.308 Verification LBA range: start 0x80000 length 0x80000 00:16:06.308 nvme0n2 : 5.02 1987.81 7.76 0.00 0.00 64179.69 5242.88 62511.26 00:16:06.308 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.308 Verification LBA range: start 0x0 length 0x80000 00:16:06.308 nvme0n3 : 5.04 1725.67 6.74 0.00 0.00 73673.12 6704.84 72593.72 00:16:06.308 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.308 Verification LBA range: start 0x80000 length 0x80000 00:16:06.308 nvme0n3 : 5.04 2005.28 7.83 0.00 0.00 63509.65 6276.33 68560.74 00:16:06.308 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.308 Verification LBA range: start 0x0 length 0x20000 00:16:06.309 nvme1n1 : 5.04 1727.26 6.75 0.00 0.00 73428.69 6654.42 65737.65 00:16:06.309 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.309 Verification LBA range: start 0x20000 length 0x20000 00:16:06.309 nvme1n1 : 5.05 2000.73 7.82 0.00 0.00 63551.21 11040.30 68560.74 00:16:06.309 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.309 Verification LBA range: start 0x0 length 0xbd0bd 00:16:06.309 nvme2n1 : 5.07 2327.98 9.09 0.00 0.00 54336.55 6503.19 61704.66 00:16:06.309 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.309 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:06.309 nvme2n1 : 5.07 2614.48 10.21 0.00 0.00 48495.95 5671.38 52832.10 00:16:06.309 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:06.309 Verification LBA range: start 0x0 length 0xa0000 00:16:06.309 nvme3n1 : 5.07 1666.87 6.51 0.00 0.00 75903.10 6704.84 79046.50 00:16:06.309 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:06.309 Verification LBA range: start 0xa0000 length 0xa0000 00:16:06.309 nvme3n1 : 5.07 1539.52 6.01 0.00 0.00 82326.65 5016.02 93565.24 00:16:06.309 [2024-12-16T22:12:12.656Z] =================================================================================================================== 00:16:06.309 [2024-12-16T22:12:12.656Z] Total : 23039.62 90.00 0.00 0.00 66206.59 5016.02 93565.24 00:16:06.571 00:16:06.571 real 0m5.845s 00:16:06.571 user 0m9.229s 00:16:06.571 sys 0m1.557s 00:16:06.571 22:12:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.571 
************************************ 00:16:06.571 END TEST bdev_verify 00:16:06.571 ************************************ 00:16:06.571 22:12:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:06.571 22:12:12 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:06.571 22:12:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:06.571 22:12:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.571 22:12:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.571 ************************************ 00:16:06.571 START TEST bdev_verify_big_io 00:16:06.571 ************************************ 00:16:06.571 22:12:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:06.834 [2024-12-16 22:12:12.965477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:06.834 [2024-12-16 22:12:12.965609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86177 ] 00:16:06.834 [2024-12-16 22:12:13.128512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.834 [2024-12-16 22:12:13.159130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.834 [2024-12-16 22:12:13.159190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.412 Running I/O for 5 seconds... 
00:16:13.271 2584.00 IOPS, 161.50 MiB/s [2024-12-16T22:12:20.189Z] 3699.00 IOPS, 231.19 MiB/s [2024-12-16T22:12:20.189Z] 3340.67 IOPS, 208.79 MiB/s 00:16:13.842 Latency(us) 00:16:13.842 [2024-12-16T22:12:20.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.842 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x0 length 0x8000 00:16:13.842 nvme0n1 : 6.03 84.94 5.31 0.00 0.00 1462182.20 98404.82 1780966.01 00:16:13.842 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x8000 length 0x8000 00:16:13.842 nvme0n1 : 5.97 150.03 9.38 0.00 0.00 812665.12 5217.67 1038896.84 00:16:13.842 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x0 length 0x8000 00:16:13.842 nvme0n2 : 5.99 83.52 5.22 0.00 0.00 1386247.32 58074.98 1568024.42 00:16:13.842 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x8000 length 0x8000 00:16:13.842 nvme0n2 : 5.88 138.69 8.67 0.00 0.00 839129.85 23290.49 1058255.16 00:16:13.842 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x0 length 0x8000 00:16:13.842 nvme0n3 : 6.03 84.93 5.31 0.00 0.00 1274919.95 6326.74 1380893.93 00:16:13.842 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x8000 length 0x8000 00:16:13.842 nvme0n3 : 5.89 142.71 8.92 0.00 0.00 817006.22 47185.92 784012.21 00:16:13.842 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x0 length 0x2000 00:16:13.842 nvme1n1 : 6.06 73.98 4.62 0.00 0.00 1413035.49 17140.18 1606741.07 00:16:13.842 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x2000 length 0x2000 00:16:13.842 nvme1n1 : 5.97 136.77 8.55 0.00 0.00 833667.10 78643.20 1871304.86 00:16:13.842 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x0 length 0xbd0b 00:16:13.842 nvme2n1 : 6.29 183.29 11.46 0.00 0.00 546243.98 3604.48 1058255.16 00:16:13.842 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:13.842 nvme2n1 : 5.89 156.48 9.78 0.00 0.00 713760.20 102841.11 1497043.89 00:16:13.842 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0x0 length 0xa000 00:16:13.842 nvme3n1 : 6.53 274.55 17.16 0.00 0.00 347958.34 724.68 3639365.32 00:16:13.842 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:13.842 Verification LBA range: start 0xa000 length 0xa000 00:16:13.842 nvme3n1 : 5.98 157.97 9.87 0.00 0.00 690510.28 3188.58 1426063.36 00:16:13.842 [2024-12-16T22:12:20.189Z] =================================================================================================================== 00:16:13.842 [2024-12-16T22:12:20.189Z] Total : 1667.86 104.24 0.00 0.00 795048.12 724.68 3639365.32 00:16:13.842 00:16:13.842 real 0m7.271s 00:16:13.842 user 0m13.459s 00:16:13.842 sys 0m0.422s 00:16:13.842 22:12:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.842 
************************************ 00:16:13.842 END TEST bdev_verify_big_io 00:16:13.842 ************************************ 00:16:13.842 22:12:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.103 22:12:20 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.103 22:12:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:14.103 22:12:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.103 22:12:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.103 ************************************ 00:16:14.103 START TEST bdev_write_zeroes 00:16:14.103 ************************************ 00:16:14.103 22:12:20 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.103 [2024-12-16 22:12:20.293868] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:14.103 [2024-12-16 22:12:20.293988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86283 ] 00:16:14.364 [2024-12-16 22:12:20.451757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.364 [2024-12-16 22:12:20.475636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.364 Running I/O for 1 seconds... 
00:16:15.750 80768.00 IOPS, 315.50 MiB/s 00:16:15.750 Latency(us) 00:16:15.750 [2024-12-16T22:12:22.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.750 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:15.750 nvme0n1 : 1.02 13305.68 51.98 0.00 0.00 9610.45 4889.99 23189.66 00:16:15.750 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:15.750 nvme0n2 : 1.01 13243.65 51.73 0.00 0.00 9648.52 6452.78 22483.89 00:16:15.750 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:15.750 nvme0n3 : 1.02 13227.93 51.67 0.00 0.00 9650.11 6553.60 21778.12 00:16:15.750 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:15.750 nvme1n1 : 1.02 13212.96 51.61 0.00 0.00 9651.01 6553.60 21072.34 00:16:15.750 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:15.750 nvme2n1 : 1.02 14112.96 55.13 0.00 0.00 9018.72 4763.96 16938.54 00:16:15.750 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:15.750 nvme3n1 : 1.02 13195.18 51.54 0.00 0.00 9599.58 3856.54 23391.31 00:16:15.750 [2024-12-16T22:12:22.097Z] =================================================================================================================== 00:16:15.750 [2024-12-16T22:12:22.097Z] Total : 80298.36 313.67 0.00 0.00 9523.66 3856.54 23391.31 00:16:15.750 00:16:15.750 real 0m1.692s 00:16:15.750 user 0m1.023s 00:16:15.750 sys 0m0.482s 00:16:15.750 22:12:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.750 22:12:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:15.750 ************************************ 00:16:15.750 END TEST bdev_write_zeroes 00:16:15.750 ************************************ 00:16:15.750 22:12:21 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:15.750 22:12:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:15.750 22:12:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.750 22:12:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.750 ************************************ 00:16:15.750 START TEST bdev_json_nonenclosed 00:16:15.750 ************************************ 00:16:15.750 22:12:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:15.750 [2024-12-16 22:12:22.054996] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:16:15.750 [2024-12-16 22:12:22.055150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86320 ] 00:16:16.010 [2024-12-16 22:12:22.212676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.010 [2024-12-16 22:12:22.244102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.010 [2024-12-16 22:12:22.244218] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:16.010 [2024-12-16 22:12:22.244235] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:16.010 [2024-12-16 22:12:22.244247] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:16.010 00:16:16.010 real 0m0.342s 00:16:16.010 user 0m0.129s 00:16:16.010 sys 0m0.107s 00:16:16.010 ************************************ 00:16:16.010 END TEST bdev_json_nonenclosed 00:16:16.010 ************************************ 00:16:16.010 22:12:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.010 22:12:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:16.271 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.271 22:12:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:16.271 22:12:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.271 22:12:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.271 ************************************ 00:16:16.271 START TEST bdev_json_nonarray 00:16:16.271 ************************************ 00:16:16.271 22:12:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.271 [2024-12-16 22:12:22.442853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:16.271 [2024-12-16 22:12:22.442975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86345 ] 00:16:16.271 [2024-12-16 22:12:22.601763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.532 [2024-12-16 22:12:22.626202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.533 [2024-12-16 22:12:22.626305] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:16.533 [2024-12-16 22:12:22.626321] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:16.533 [2024-12-16 22:12:22.626332] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:16.533 00:16:16.533 real 0m0.316s 00:16:16.533 user 0m0.125s 00:16:16.533 sys 0m0.086s 00:16:16.533 22:12:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.533 ************************************ 00:16:16.533 END TEST bdev_json_nonarray 00:16:16.533 ************************************ 00:16:16.533 22:12:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:16.533 22:12:22 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:17.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:23.686 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:23.686 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:26.238 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:26.238 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:26.238 00:16:26.238 real 0m52.291s 00:16:26.238 user 1m12.810s 00:16:26.238 sys 0m42.608s 00:16:26.238 22:12:32 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.238 ************************************ 00:16:26.238 END TEST blockdev_xnvme 00:16:26.238 ************************************ 00:16:26.238 22:12:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.238 22:12:32 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:26.238 22:12:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.238 22:12:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.238 22:12:32 -- common/autotest_common.sh@10 -- # set +x 00:16:26.238 ************************************ 00:16:26.238 START TEST ublk 00:16:26.238 ************************************ 00:16:26.238 22:12:32 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:26.238 * Looking for test storage... 
00:16:26.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:26.238 22:12:32 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:26.238 22:12:32 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:26.238 22:12:32 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.531 22:12:32 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.531 22:12:32 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.531 22:12:32 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.531 22:12:32 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.531 22:12:32 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.531 22:12:32 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.531 22:12:32 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.531 22:12:32 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:26.531 22:12:32 ublk -- scripts/common.sh@345 -- # : 1 00:16:26.531 22:12:32 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.531 22:12:32 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:26.531 22:12:32 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:26.531 22:12:32 ublk -- scripts/common.sh@353 -- # local d=1 00:16:26.531 22:12:32 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.531 22:12:32 ublk -- scripts/common.sh@355 -- # echo 1 00:16:26.531 22:12:32 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.531 22:12:32 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@353 -- # local d=2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.531 22:12:32 ublk -- scripts/common.sh@355 -- # echo 2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.531 22:12:32 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.531 22:12:32 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.531 22:12:32 ublk -- scripts/common.sh@368 -- # return 0 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.531 --rc genhtml_branch_coverage=1 00:16:26.531 --rc genhtml_function_coverage=1 00:16:26.531 --rc genhtml_legend=1 00:16:26.531 --rc geninfo_all_blocks=1 00:16:26.531 --rc geninfo_unexecuted_blocks=1 00:16:26.531 00:16:26.531 ' 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.531 --rc genhtml_branch_coverage=1 00:16:26.531 --rc genhtml_function_coverage=1 00:16:26.531 --rc genhtml_legend=1 00:16:26.531 --rc geninfo_all_blocks=1 00:16:26.531 --rc geninfo_unexecuted_blocks=1 00:16:26.531 00:16:26.531 ' 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.531 --rc genhtml_branch_coverage=1 00:16:26.531 --rc 
genhtml_function_coverage=1 00:16:26.531 --rc genhtml_legend=1 00:16:26.531 --rc geninfo_all_blocks=1 00:16:26.531 --rc geninfo_unexecuted_blocks=1 00:16:26.531 00:16:26.531 ' 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:26.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.531 --rc genhtml_branch_coverage=1 00:16:26.531 --rc genhtml_function_coverage=1 00:16:26.531 --rc genhtml_legend=1 00:16:26.531 --rc geninfo_all_blocks=1 00:16:26.531 --rc geninfo_unexecuted_blocks=1 00:16:26.531 00:16:26.531 ' 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:26.531 22:12:32 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:26.531 22:12:32 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:26.531 22:12:32 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:26.531 22:12:32 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:26.531 22:12:32 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:26.531 22:12:32 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:26.531 22:12:32 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:26.531 22:12:32 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:26.531 22:12:32 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.531 22:12:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.531 ************************************ 00:16:26.531 START TEST test_save_ublk_config 00:16:26.531 ************************************ 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86656 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86656 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86656 ']' 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:26.531 22:12:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.531 22:12:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:26.531 [2024-12-16 22:12:32.765580] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:26.531 [2024-12-16 22:12:32.765728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86656 ] 00:16:26.815 [2024-12-16 22:12:32.924709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.815 [2024-12-16 22:12:32.953503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:27.396 [2024-12-16 22:12:33.632861] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:27.396 [2024-12-16 22:12:33.633864] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:27.396 malloc0 00:16:27.396 [2024-12-16 22:12:33.664987] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:27.396 [2024-12-16 22:12:33.665082] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:27.396 [2024-12-16 22:12:33.665092] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:27.396 [2024-12-16 22:12:33.665107] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.396 [2024-12-16 22:12:33.673954] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.396 [2024-12-16 22:12:33.673990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.396 [2024-12-16 22:12:33.680869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.396 [2024-12-16 22:12:33.680994] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:27.396 [2024-12-16 22:12:33.697870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.396 0 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.396 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:27.658 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:27.658 22:12:33 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:27.658 "subsystems": [ 00:16:27.658 { 00:16:27.658 "subsystem": "fsdev", 00:16:27.658 "config": [ 00:16:27.658 { 00:16:27.658 "method": "fsdev_set_opts", 00:16:27.658 "params": { 00:16:27.658 "fsdev_io_pool_size": 65535, 00:16:27.658 "fsdev_io_cache_size": 256 00:16:27.658 } 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "keyring", 00:16:27.658 "config": [] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "iobuf", 00:16:27.658 "config": [ 00:16:27.658 { 00:16:27.658 "method": "iobuf_set_options", 00:16:27.658 "params": { 00:16:27.658 "small_pool_count": 8192, 00:16:27.658 "large_pool_count": 1024, 00:16:27.658 "small_bufsize": 8192, 00:16:27.658 "large_bufsize": 135168, 00:16:27.658 "enable_numa": false 00:16:27.658 } 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "sock", 00:16:27.658 "config": [ 00:16:27.658 { 00:16:27.658 "method": "sock_set_default_impl", 00:16:27.658 "params": { 00:16:27.658 "impl_name": "posix" 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "sock_impl_set_options", 00:16:27.658 "params": { 00:16:27.658 "impl_name": "ssl", 00:16:27.658 "recv_buf_size": 4096, 00:16:27.658 "send_buf_size": 4096, 00:16:27.658 "enable_recv_pipe": true, 00:16:27.658 "enable_quickack": false, 00:16:27.658 "enable_placement_id": 0, 00:16:27.658 "enable_zerocopy_send_server": true, 00:16:27.658 "enable_zerocopy_send_client": false, 00:16:27.658 "zerocopy_threshold": 0, 00:16:27.658 "tls_version": 0, 00:16:27.658 "enable_ktls": false 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "sock_impl_set_options", 00:16:27.658 "params": { 00:16:27.658 "impl_name": "posix", 00:16:27.658 "recv_buf_size": 2097152, 00:16:27.658 "send_buf_size": 2097152, 00:16:27.658 "enable_recv_pipe": true, 00:16:27.658 "enable_quickack": false, 00:16:27.658 "enable_placement_id": 0, 00:16:27.658 "enable_zerocopy_send_server": true, 00:16:27.658 "enable_zerocopy_send_client": false, 00:16:27.658 "zerocopy_threshold": 0, 00:16:27.658 "tls_version": 0, 00:16:27.658 "enable_ktls": false 00:16:27.658 } 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "vmd", 00:16:27.658 "config": [] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "accel", 00:16:27.658 "config": [ 00:16:27.658 { 00:16:27.658 "method": "accel_set_options", 00:16:27.658 "params": { 00:16:27.658 "small_cache_size": 128, 00:16:27.658 "large_cache_size": 16, 00:16:27.658 "task_count": 2048, 00:16:27.658 "sequence_count": 2048, 00:16:27.658 "buf_count": 2048 00:16:27.658 } 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "bdev", 00:16:27.658 "config": [ 00:16:27.658 { 00:16:27.658 "method": "bdev_set_options", 00:16:27.658 "params": { 00:16:27.658 "bdev_io_pool_size": 65535, 00:16:27.658 "bdev_io_cache_size": 256, 00:16:27.658 "bdev_auto_examine": true, 00:16:27.658 "iobuf_small_cache_size": 128, 00:16:27.658 "iobuf_large_cache_size": 16 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "bdev_raid_set_options", 00:16:27.658 "params": { 00:16:27.658 "process_window_size_kb": 1024, 00:16:27.658 "process_max_bandwidth_mb_sec": 0 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "bdev_iscsi_set_options", 00:16:27.658 "params": { 00:16:27.658 "timeout_sec": 30 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 
"method": "bdev_nvme_set_options", 00:16:27.658 "params": { 00:16:27.658 "action_on_timeout": "none", 00:16:27.658 "timeout_us": 0, 00:16:27.658 "timeout_admin_us": 0, 00:16:27.658 "keep_alive_timeout_ms": 10000, 00:16:27.658 "arbitration_burst": 0, 00:16:27.658 "low_priority_weight": 0, 00:16:27.658 "medium_priority_weight": 0, 00:16:27.658 "high_priority_weight": 0, 00:16:27.658 "nvme_adminq_poll_period_us": 10000, 00:16:27.658 "nvme_ioq_poll_period_us": 0, 00:16:27.658 "io_queue_requests": 0, 00:16:27.658 "delay_cmd_submit": true, 00:16:27.658 "transport_retry_count": 4, 00:16:27.658 "bdev_retry_count": 3, 00:16:27.658 "transport_ack_timeout": 0, 00:16:27.658 "ctrlr_loss_timeout_sec": 0, 00:16:27.658 "reconnect_delay_sec": 0, 00:16:27.658 "fast_io_fail_timeout_sec": 0, 00:16:27.658 "disable_auto_failback": false, 00:16:27.658 "generate_uuids": false, 00:16:27.658 "transport_tos": 0, 00:16:27.658 "nvme_error_stat": false, 00:16:27.658 "rdma_srq_size": 0, 00:16:27.658 "io_path_stat": false, 00:16:27.658 "allow_accel_sequence": false, 00:16:27.658 "rdma_max_cq_size": 0, 00:16:27.658 "rdma_cm_event_timeout_ms": 0, 00:16:27.658 "dhchap_digests": [ 00:16:27.658 "sha256", 00:16:27.658 "sha384", 00:16:27.658 "sha512" 00:16:27.658 ], 00:16:27.658 "dhchap_dhgroups": [ 00:16:27.658 "null", 00:16:27.658 "ffdhe2048", 00:16:27.658 "ffdhe3072", 00:16:27.658 "ffdhe4096", 00:16:27.658 "ffdhe6144", 00:16:27.658 "ffdhe8192" 00:16:27.658 ], 00:16:27.658 "rdma_umr_per_io": false 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "bdev_nvme_set_hotplug", 00:16:27.658 "params": { 00:16:27.658 "period_us": 100000, 00:16:27.658 "enable": false 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "bdev_malloc_create", 00:16:27.658 "params": { 00:16:27.658 "name": "malloc0", 00:16:27.658 "num_blocks": 8192, 00:16:27.658 "block_size": 4096, 00:16:27.658 "physical_block_size": 4096, 00:16:27.658 "uuid": "e4f04448-6457-4c98-8cb5-b71dc6cb9c05", 00:16:27.658 "optimal_io_boundary": 0, 00:16:27.658 "md_size": 0, 00:16:27.658 "dif_type": 0, 00:16:27.658 "dif_is_head_of_md": false, 00:16:27.658 "dif_pi_format": 0 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "bdev_wait_for_examine" 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "scsi", 00:16:27.658 "config": null 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "scheduler", 00:16:27.658 "config": [ 00:16:27.658 { 00:16:27.658 "method": "framework_set_scheduler", 00:16:27.658 "params": { 00:16:27.658 "name": "static" 00:16:27.658 } 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "vhost_scsi", 00:16:27.658 "config": [] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "vhost_blk", 00:16:27.658 "config": [] 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "subsystem": "ublk", 00:16:27.658 "config": [ 00:16:27.658 { 00:16:27.658 "method": "ublk_create_target", 00:16:27.658 "params": { 00:16:27.658 "cpumask": "1" 00:16:27.658 } 00:16:27.658 }, 00:16:27.658 { 00:16:27.658 "method": "ublk_start_disk", 00:16:27.658 "params": { 00:16:27.658 "bdev_name": "malloc0", 00:16:27.658 "ublk_id": 0, 00:16:27.658 "num_queues": 1, 00:16:27.658 "queue_depth": 128 00:16:27.658 } 00:16:27.658 } 00:16:27.658 ] 00:16:27.658 }, 00:16:27.659 { 00:16:27.659 "subsystem": "nbd", 00:16:27.659 "config": [] 00:16:27.659 }, 00:16:27.659 { 00:16:27.659 "subsystem": "nvmf", 00:16:27.659 "config": [ 00:16:27.659 { 00:16:27.659 "method": "nvmf_set_config", 
00:16:27.659 "params": { 00:16:27.659 "discovery_filter": "match_any", 00:16:27.659 "admin_cmd_passthru": { 00:16:27.659 "identify_ctrlr": false 00:16:27.659 }, 00:16:27.659 "dhchap_digests": [ 00:16:27.659 "sha256", 00:16:27.659 "sha384", 00:16:27.659 "sha512" 00:16:27.659 ], 00:16:27.659 "dhchap_dhgroups": [ 00:16:27.659 "null", 00:16:27.659 "ffdhe2048", 00:16:27.659 "ffdhe3072", 00:16:27.659 "ffdhe4096", 00:16:27.659 "ffdhe6144", 00:16:27.659 "ffdhe8192" 00:16:27.659 ] 00:16:27.659 } 00:16:27.659 }, 00:16:27.659 { 00:16:27.659 "method": "nvmf_set_max_subsystems", 00:16:27.659 "params": { 00:16:27.659 "max_subsystems": 1024 00:16:27.659 } 00:16:27.659 }, 00:16:27.659 { 00:16:27.659 "method": "nvmf_set_crdt", 00:16:27.659 "params": { 00:16:27.659 "crdt1": 0, 00:16:27.659 "crdt2": 0, 00:16:27.659 "crdt3": 0 00:16:27.659 } 00:16:27.659 } 00:16:27.659 ] 00:16:27.659 }, 00:16:27.659 { 00:16:27.659 "subsystem": "iscsi", 00:16:27.659 "config": [ 00:16:27.659 { 00:16:27.659 "method": "iscsi_set_options", 00:16:27.659 "params": { 00:16:27.659 "node_base": "iqn.2016-06.io.spdk", 00:16:27.659 "max_sessions": 128, 00:16:27.659 "max_connections_per_session": 2, 00:16:27.659 "max_queue_depth": 64, 00:16:27.659 "default_time2wait": 2, 00:16:27.659 "default_time2retain": 20, 00:16:27.659 "first_burst_length": 8192, 00:16:27.659 "immediate_data": true, 00:16:27.659 "allow_duplicated_isid": false, 00:16:27.659 "error_recovery_level": 0, 00:16:27.659 "nop_timeout": 60, 00:16:27.659 "nop_in_interval": 30, 00:16:27.659 "disable_chap": false, 00:16:27.659 "require_chap": false, 00:16:27.659 "mutual_chap": false, 00:16:27.659 "chap_group": 0, 00:16:27.659 "max_large_datain_per_connection": 64, 00:16:27.659 "max_r2t_per_connection": 4, 00:16:27.659 "pdu_pool_size": 36864, 00:16:27.659 "immediate_data_pool_size": 16384, 00:16:27.659 "data_out_pool_size": 2048 00:16:27.659 } 00:16:27.659 } 00:16:27.659 ] 00:16:27.659 } 00:16:27.659 ] 00:16:27.659 }' 00:16:27.659 22:12:33 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86656 00:16:27.659 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86656 ']' 00:16:27.659 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86656 00:16:27.659 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:27.659 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.659 22:12:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86656 00:16:27.919 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.919 killing process with pid 86656 00:16:27.919 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.920 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86656' 00:16:27.920 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86656 00:16:27.920 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86656 00:16:28.180 [2024-12-16 22:12:34.300196] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.180 [2024-12-16 22:12:34.335966] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.180 [2024-12-16 22:12:34.336128] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.180 
[2024-12-16 22:12:34.344894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.180 [2024-12-16 22:12:34.344961] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:28.180 [2024-12-16 22:12:34.344970] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:28.180 [2024-12-16 22:12:34.345000] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:28.180 [2024-12-16 22:12:34.345148] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:28.751 22:12:34 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86694 00:16:28.751 22:12:34 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86694 00:16:28.751 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86694 ']' 00:16:28.751 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.751 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.751 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.752 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.752 22:12:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:28.752 22:12:34 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:28.752 "subsystems": [ 00:16:28.752 { 00:16:28.752 "subsystem": "fsdev", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "fsdev_set_opts", 00:16:28.752 "params": { 00:16:28.752 "fsdev_io_pool_size": 65535, 00:16:28.752 "fsdev_io_cache_size": 256 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "keyring", 00:16:28.752 "config": [] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "iobuf", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "iobuf_set_options", 00:16:28.752 "params": { 00:16:28.752 "small_pool_count": 8192, 00:16:28.752 "large_pool_count": 1024, 00:16:28.752 "small_bufsize": 8192, 00:16:28.752 "large_bufsize": 135168, 00:16:28.752 "enable_numa": false 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "sock", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "sock_set_default_impl", 00:16:28.752 "params": { 00:16:28.752 "impl_name": "posix" 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "sock_impl_set_options", 00:16:28.752 "params": { 00:16:28.752 "impl_name": "ssl", 00:16:28.752 "recv_buf_size": 4096, 00:16:28.752 "send_buf_size": 4096, 00:16:28.752 "enable_recv_pipe": true, 00:16:28.752 "enable_quickack": false, 00:16:28.752 "enable_placement_id": 0, 00:16:28.752 "enable_zerocopy_send_server": true, 00:16:28.752 "enable_zerocopy_send_client": false, 00:16:28.752 "zerocopy_threshold": 0, 00:16:28.752 "tls_version": 0, 00:16:28.752 "enable_ktls": false 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "sock_impl_set_options", 00:16:28.752 "params": { 00:16:28.752 "impl_name": "posix", 00:16:28.752 "recv_buf_size": 2097152, 00:16:28.752 "send_buf_size": 2097152, 00:16:28.752 "enable_recv_pipe": true, 00:16:28.752 "enable_quickack": false, 00:16:28.752 "enable_placement_id": 0, 00:16:28.752 "enable_zerocopy_send_server": true, 00:16:28.752 
"enable_zerocopy_send_client": false, 00:16:28.752 "zerocopy_threshold": 0, 00:16:28.752 "tls_version": 0, 00:16:28.752 "enable_ktls": false 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "vmd", 00:16:28.752 "config": [] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "accel", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "accel_set_options", 00:16:28.752 "params": { 00:16:28.752 "small_cache_size": 128, 00:16:28.752 "large_cache_size": 16, 00:16:28.752 "task_count": 2048, 00:16:28.752 "sequence_count": 2048, 00:16:28.752 "buf_count": 2048 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "bdev", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "bdev_set_options", 00:16:28.752 "params": { 00:16:28.752 "bdev_io_pool_size": 65535, 00:16:28.752 "bdev_io_cache_size": 256, 00:16:28.752 "bdev_auto_examine": true, 00:16:28.752 "iobuf_small_cache_size": 128, 00:16:28.752 "iobuf_large_cache_size": 16 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "bdev_raid_set_options", 00:16:28.752 "params": { 00:16:28.752 "process_window_size_kb": 1024, 00:16:28.752 "process_max_bandwidth_mb_sec": 0 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "bdev_iscsi_set_options", 00:16:28.752 "params": { 00:16:28.752 "timeout_sec": 30 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "bdev_nvme_set_options", 00:16:28.752 "params": { 00:16:28.752 "action_on_timeout": "none", 00:16:28.752 "timeout_us": 0, 00:16:28.752 "timeout_admin_us": 0, 00:16:28.752 "keep_alive_timeout_ms": 10000, 00:16:28.752 "arbitration_burst": 0, 00:16:28.752 "low_priority_weight": 0, 00:16:28.752 "medium_priority_weight": 0, 00:16:28.752 "high_priority_weight": 0, 00:16:28.752 "nvme_adminq_poll_period_us": 10000, 00:16:28.752 "nvme_ioq_poll_period_us": 0, 00:16:28.752 "io_queue_requests": 0, 00:16:28.752 "delay_cmd_submit": true, 00:16:28.752 "transport_retry_count": 4, 00:16:28.752 "bdev_retry_count": 3, 00:16:28.752 "transport_ack_timeout": 0, 00:16:28.752 "ctrlr_loss_timeout_sec": 0, 00:16:28.752 "reconnect_delay_sec": 0, 00:16:28.752 "fast_io_fail_timeout_sec": 0, 00:16:28.752 "disable_auto_failback": false, 00:16:28.752 "generate_uuids": false, 00:16:28.752 "transport_tos": 0, 00:16:28.752 "nvme_error_stat": false, 00:16:28.752 "rdma_srq_size": 0, 00:16:28.752 "io_path_stat": false, 00:16:28.752 "allow_accel_sequence": false, 00:16:28.752 "rdma_max_cq_size": 0, 00:16:28.752 "rdma_cm_event_timeout_ms": 0, 00:16:28.752 "dhchap_digests": [ 00:16:28.752 "sha256", 00:16:28.752 "sha384", 00:16:28.752 "sha512" 00:16:28.752 ], 00:16:28.752 "dhchap_dhgroups": [ 00:16:28.752 "null", 00:16:28.752 "ffdhe2048", 00:16:28.752 "ffdhe3072", 00:16:28.752 "ffdhe4096", 00:16:28.752 "ffdhe6144", 00:16:28.752 "ffdhe8192" 00:16:28.752 ], 00:16:28.752 "rdma_umr_per_io": false 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "bdev_nvme_set_hotplug", 00:16:28.752 "params": { 00:16:28.752 "period_us": 100000, 00:16:28.752 "enable": false 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "bdev_malloc_create", 00:16:28.752 "params": { 00:16:28.752 "name": "malloc0", 00:16:28.752 "num_blocks": 8192, 00:16:28.752 "block_size": 4096, 00:16:28.752 "physical_block_size": 4096, 00:16:28.752 "uuid": "e4f04448-6457-4c98-8cb5-b71dc6cb9c05", 00:16:28.752 "optimal_io_boundary": 0, 00:16:28.752 "md_size": 0, 00:16:28.752 "dif_type": 0, 
00:16:28.752 "dif_is_head_of_md": false, 00:16:28.752 "dif_pi_format": 0 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "bdev_wait_for_examine" 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "scsi", 00:16:28.752 "config": null 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "scheduler", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "framework_set_scheduler", 00:16:28.752 "params": { 00:16:28.752 "name": "static" 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "vhost_scsi", 00:16:28.752 "config": [] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "vhost_blk", 00:16:28.752 "config": [] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "ublk", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "ublk_create_target", 00:16:28.752 "params": { 00:16:28.752 "cpumask": "1" 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "ublk_start_disk", 00:16:28.752 "params": { 00:16:28.752 "bdev_name": "malloc0", 00:16:28.752 "ublk_id": 0, 00:16:28.752 "num_queues": 1, 00:16:28.752 "queue_depth": 128 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "nbd", 00:16:28.752 "config": [] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "nvmf", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "nvmf_set_config", 00:16:28.752 "params": { 00:16:28.752 "discovery_filter": "match_any", 00:16:28.752 "admin_cmd_passthru": { 00:16:28.752 "identify_ctrlr": false 00:16:28.752 }, 00:16:28.752 "dhchap_digests": [ 00:16:28.752 "sha256", 00:16:28.752 "sha384", 00:16:28.752 "sha512" 00:16:28.752 ], 00:16:28.752 "dhchap_dhgroups": [ 00:16:28.752 "null", 00:16:28.752 "ffdhe2048", 00:16:28.752 "ffdhe3072", 00:16:28.752 "ffdhe40 22:12:34 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:28.752 96", 00:16:28.752 "ffdhe6144", 00:16:28.752 "ffdhe8192" 00:16:28.752 ] 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "nvmf_set_max_subsystems", 00:16:28.752 "params": { 00:16:28.752 "max_subsystems": 1024 00:16:28.752 } 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "method": "nvmf_set_crdt", 00:16:28.752 "params": { 00:16:28.752 "crdt1": 0, 00:16:28.752 "crdt2": 0, 00:16:28.752 "crdt3": 0 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 }, 00:16:28.752 { 00:16:28.752 "subsystem": "iscsi", 00:16:28.752 "config": [ 00:16:28.752 { 00:16:28.752 "method": "iscsi_set_options", 00:16:28.752 "params": { 00:16:28.752 "node_base": "iqn.2016-06.io.spdk", 00:16:28.752 "max_sessions": 128, 00:16:28.752 "max_connections_per_session": 2, 00:16:28.752 "max_queue_depth": 64, 00:16:28.752 "default_time2wait": 2, 00:16:28.752 "default_time2retain": 20, 00:16:28.752 "first_burst_length": 8192, 00:16:28.752 "immediate_data": true, 00:16:28.752 "allow_duplicated_isid": false, 00:16:28.752 "error_recovery_level": 0, 00:16:28.752 "nop_timeout": 60, 00:16:28.752 "nop_in_interval": 30, 00:16:28.752 "disable_chap": false, 00:16:28.752 "require_chap": false, 00:16:28.752 "mutual_chap": false, 00:16:28.752 "chap_group": 0, 00:16:28.752 "max_large_datain_per_connection": 64, 00:16:28.752 "max_r2t_per_connection": 4, 00:16:28.752 "pdu_pool_size": 36864, 00:16:28.752 "immediate_data_pool_size": 16384, 00:16:28.752 "data_out_pool_size": 2048 00:16:28.752 } 00:16:28.752 } 00:16:28.752 ] 00:16:28.752 } 00:16:28.752 ] 
00:16:28.752 }' 00:16:28.752 [2024-12-16 22:12:34.874491] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:28.752 [2024-12-16 22:12:34.874900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86694 ] 00:16:28.752 [2024-12-16 22:12:35.038224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.752 [2024-12-16 22:12:35.067092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.324 [2024-12-16 22:12:35.456859] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:29.324 [2024-12-16 22:12:35.457240] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:29.324 [2024-12-16 22:12:35.464997] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:29.324 [2024-12-16 22:12:35.465094] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:29.324 [2024-12-16 22:12:35.465103] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:29.324 [2024-12-16 22:12:35.465113] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:29.324 [2024-12-16 22:12:35.473948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:29.324 [2024-12-16 22:12:35.473982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:29.324 [2024-12-16 22:12:35.480873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:29.324 [2024-12-16 22:12:35.480987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:29.324 [2024-12-16 22:12:35.497855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86694 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86694 ']' 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86694 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86694 00:16:29.585 22:12:35 
ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.585 killing process with pid 86694 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86694' 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86694 00:16:29.585 22:12:35 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86694 00:16:29.847 [2024-12-16 22:12:36.080519] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:29.847 [2024-12-16 22:12:36.117875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:29.847 [2024-12-16 22:12:36.118027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:29.847 [2024-12-16 22:12:36.127858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:29.847 [2024-12-16 22:12:36.127936] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:29.847 [2024-12-16 22:12:36.127958] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:29.847 [2024-12-16 22:12:36.127991] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:29.847 [2024-12-16 22:12:36.128143] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:30.420 22:12:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:30.420 00:16:30.420 real 0m3.898s 00:16:30.420 user 0m2.693s 00:16:30.420 sys 0m1.897s 00:16:30.420 ************************************ 00:16:30.420 END TEST test_save_ublk_config 00:16:30.420 ************************************ 00:16:30.420 22:12:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.420 22:12:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:30.420 22:12:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86745 00:16:30.420 22:12:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.420 22:12:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86745 00:16:30.420 22:12:36 ublk -- common/autotest_common.sh@835 -- # '[' -z 86745 ']' 00:16:30.420 22:12:36 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.420 22:12:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:30.420 22:12:36 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.420 22:12:36 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.420 22:12:36 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.420 22:12:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:30.420 [2024-12-16 22:12:36.711520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
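The -m 0x3 flag on the spdk_tgt invocation above is a CPU core bitmask (bit i set means core i is used), so this run gets cores 0 and 1, matching the "Total cores available: 2" notice and the two reactor lines that follow. The equivalent launch, as a sketch:

    # 0x3 = 0b11: run SPDK reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk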
00:16:30.420 [2024-12-16 22:12:36.711680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86745 ] 00:16:30.681 [2024-12-16 22:12:36.867268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.681 [2024-12-16 22:12:36.897718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.681 [2024-12-16 22:12:36.897815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.254 22:12:37 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.254 22:12:37 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:31.254 22:12:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:31.254 22:12:37 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.254 22:12:37 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.254 22:12:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.254 ************************************ 00:16:31.254 START TEST test_create_ublk 00:16:31.254 ************************************ 00:16:31.254 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:31.254 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:31.254 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.254 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.254 [2024-12-16 22:12:37.585863] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:31.254 [2024-12-16 22:12:37.587615] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:31.254 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.254 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:31.254 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:31.254 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.254 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.516 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:31.516 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.516 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.516 [2024-12-16 22:12:37.674045] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:31.516 [2024-12-16 22:12:37.674483] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:31.516 [2024-12-16 22:12:37.674500] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:31.516 [2024-12-16 22:12:37.674511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:31.516 [2024-12-16 22:12:37.682235] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:31.516 [2024-12-16 22:12:37.682274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:31.516 
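The bring-up traced here is three RPCs end to end: create the ublk target, create a backing malloc bdev, then expose that bdev as a ublk block device. A minimal sketch with the stock rpc.py client, sizes and queue settings taken from the trace above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
    # 128 MiB malloc bdev with a 4096-byte block size (named Malloc0)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 4096
    # expose the bdev as /dev/ublkb0 with 4 queues of depth 512
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512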
[2024-12-16 22:12:37.689908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:31.516 [2024-12-16 22:12:37.690606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:31.516 [2024-12-16 22:12:37.709906] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:31.516 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:31.516 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.516 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.516 22:12:37 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:31.516 { 00:16:31.516 "ublk_device": "/dev/ublkb0", 00:16:31.516 "id": 0, 00:16:31.516 "queue_depth": 512, 00:16:31.516 "num_queues": 4, 00:16:31.516 "bdev_name": "Malloc0" 00:16:31.516 } 00:16:31.516 ]' 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:31.516 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:31.778 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:31.778 22:12:37 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
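run_fio_test expands to the single fio job captured in fio_template above: a 10-second, time-based, O_DIRECT sequential write of the 0xcc pattern across the full 128 MiB (134217728-byte) device, with fio's pattern verifier armed so any readback mismatch fails the job. The same command, rewrapped for readability:

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0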
00:16:31.778 22:12:37 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:31.778 fio: verification read phase will never start because write phase uses all of runtime 00:16:31.778 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:31.778 fio-3.35 00:16:31.778 Starting 1 process 00:16:43.991 00:16:43.991 fio_test: (groupid=0, jobs=1): err= 0: pid=86789: Mon Dec 16 22:12:48 2024 00:16:43.991 write: IOPS=15.0k, BW=58.5MiB/s (61.3MB/s)(585MiB/10001msec); 0 zone resets 00:16:43.991 clat (usec): min=39, max=4087, avg=66.11, stdev=92.00 00:16:43.991 lat (usec): min=39, max=4087, avg=66.50, stdev=92.04 00:16:43.991 clat percentiles (usec): 00:16:43.991 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:16:43.991 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:16:43.991 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 70], 95.00th=[ 92], 00:16:43.991 | 99.00th=[ 123], 99.50th=[ 223], 99.90th=[ 1795], 99.95th=[ 2671], 00:16:43.991 | 99.99th=[ 3589] 00:16:43.991 bw ( KiB/s): min=34384, max=64056, per=99.82%, avg=59757.89, stdev=8247.50, samples=19 00:16:43.991 iops : min= 8596, max=16014, avg=14939.47, stdev=2061.87, samples=19 00:16:43.991 lat (usec) : 50=1.00%, 100=95.24%, 250=3.40%, 500=0.19%, 750=0.01% 00:16:43.991 lat (usec) : 1000=0.01% 00:16:43.991 lat (msec) : 2=0.06%, 4=0.09%, 10=0.01% 00:16:43.991 cpu : usr=1.88%, sys=14.03%, ctx=149684, majf=0, minf=795 00:16:43.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.991 issued rwts: total=0,149679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.991 00:16:43.991 Run status group 0 (all jobs): 00:16:43.991 WRITE: bw=58.5MiB/s (61.3MB/s), 58.5MiB/s-58.5MiB/s (61.3MB/s-61.3MB/s), io=585MiB (613MB), run=10001-10001msec 00:16:43.991 00:16:43.991 Disk stats (read/write): 00:16:43.991 ublkb0: ios=0/148040, merge=0/0, ticks=0/8139, in_queue=8140, util=99.09% 00:16:43.991 22:12:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 [2024-12-16 22:12:48.140931] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.991 [2024-12-16 22:12:48.174356] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.991 [2024-12-16 22:12:48.175380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.991 [2024-12-16 22:12:48.184874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.991 [2024-12-16 22:12:48.185183] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:43.991 [2024-12-16 22:12:48.185239] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.991 22:12:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 [2024-12-16 22:12:48.200934] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:43.991 request: 00:16:43.991 { 00:16:43.991 "ublk_id": 0, 00:16:43.991 "method": "ublk_stop_disk", 00:16:43.991 "req_id": 1 00:16:43.991 } 00:16:43.991 Got JSON-RPC error response 00:16:43.991 response: 00:16:43.991 { 00:16:43.991 "code": -19, 00:16:43.991 "message": "No such device" 00:16:43.991 } 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.991 22:12:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 [2024-12-16 22:12:48.216913] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:43.991 [2024-12-16 22:12:48.219172] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:43.991 [2024-12-16 22:12:48.219199] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.991 22:12:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.991 22:12:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:43.991 22:12:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.991 22:12:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:43.991 22:12:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:43.991 22:12:48 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:43.991 22:12:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.991 22:12:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:43.991 22:12:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:43.991 ************************************ 00:16:43.991 END TEST test_create_ublk 00:16:43.991 ************************************ 00:16:43.991 22:12:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:43.991 00:16:43.991 real 0m10.803s 00:16:43.991 user 0m0.489s 00:16:43.991 sys 0m1.486s 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 22:12:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:43.991 22:12:48 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.991 22:12:48 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.991 22:12:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 ************************************ 00:16:43.991 START TEST test_create_multi_ublk 00:16:43.991 ************************************ 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 [2024-12-16 22:12:48.431849] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:43.991 [2024-12-16 22:12:48.432715] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 [2024-12-16 22:12:48.503964] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:43.992 [2024-12-16 22:12:48.504252] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:43.992 [2024-12-16 22:12:48.504265] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:43.992 [2024-12-16 22:12:48.504270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.992 [2024-12-16 22:12:48.515895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.992 [2024-12-16 22:12:48.515912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.992 [2024-12-16 22:12:48.525868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.992 [2024-12-16 22:12:48.526351] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:43.992 [2024-12-16 22:12:48.575860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 [2024-12-16 22:12:48.659947] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:43.992 [2024-12-16 22:12:48.660234] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:43.992 [2024-12-16 22:12:48.660245] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:43.992 [2024-12-16 22:12:48.660251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.992 [2024-12-16 22:12:48.671869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.992 [2024-12-16 22:12:48.671887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.992 [2024-12-16 22:12:48.683859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.992 [2024-12-16 22:12:48.684336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:43.992 [2024-12-16 22:12:48.708869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.992 
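Each device in this test is the same malloc-bdev-plus-ublk pair with the index substituted in, iterated over seq 0 $MAX_DEV_ID (0 through 3 here). A sketch of the loop the harness is effectively unrolling:

    for i in $(seq 0 3); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # /dev/ublkb$i
    done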
22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 [2024-12-16 22:12:48.791955] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:43.992 [2024-12-16 22:12:48.792245] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:43.992 [2024-12-16 22:12:48.792258] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:43.992 [2024-12-16 22:12:48.792262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.992 [2024-12-16 22:12:48.803877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.992 [2024-12-16 22:12:48.803895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.992 [2024-12-16 22:12:48.815857] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.992 [2024-12-16 22:12:48.816327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:43.992 [2024-12-16 22:12:48.828855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 [2024-12-16 22:12:48.911946] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:43.992 [2024-12-16 22:12:48.912239] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:43.992 [2024-12-16 22:12:48.912250] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:43.992 [2024-12-16 22:12:48.912256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.992 
[2024-12-16 22:12:48.923885] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.992 [2024-12-16 22:12:48.923907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.992 [2024-12-16 22:12:48.935864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.992 [2024-12-16 22:12:48.936346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:43.992 [2024-12-16 22:12:48.948886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:43.992 { 00:16:43.992 "ublk_device": "/dev/ublkb0", 00:16:43.992 "id": 0, 00:16:43.992 "queue_depth": 512, 00:16:43.992 "num_queues": 4, 00:16:43.992 "bdev_name": "Malloc0" 00:16:43.992 }, 00:16:43.992 { 00:16:43.992 "ublk_device": "/dev/ublkb1", 00:16:43.992 "id": 1, 00:16:43.992 "queue_depth": 512, 00:16:43.992 "num_queues": 4, 00:16:43.992 "bdev_name": "Malloc1" 00:16:43.992 }, 00:16:43.992 { 00:16:43.992 "ublk_device": "/dev/ublkb2", 00:16:43.992 "id": 2, 00:16:43.992 "queue_depth": 512, 00:16:43.992 "num_queues": 4, 00:16:43.992 "bdev_name": "Malloc2" 00:16:43.992 }, 00:16:43.992 { 00:16:43.992 "ublk_device": "/dev/ublkb3", 00:16:43.992 "id": 3, 00:16:43.992 "queue_depth": 512, 00:16:43.992 "num_queues": 4, 00:16:43.992 "bdev_name": "Malloc3" 00:16:43.992 } 00:16:43.992 ]' 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.992 22:12:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:43.992 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 [2024-12-16 22:12:49.607939] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.993 [2024-12-16 22:12:49.655911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.993 [2024-12-16 22:12:49.656737] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.993 [2024-12-16 22:12:49.663869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.993 [2024-12-16 22:12:49.664122] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:43.993 [2024-12-16 22:12:49.664133] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 [2024-12-16 22:12:49.679925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.993 [2024-12-16 22:12:49.720264] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.993 [2024-12-16 22:12:49.721448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.993 [2024-12-16 22:12:49.727859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.993 [2024-12-16 22:12:49.728105] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:43.993 [2024-12-16 22:12:49.728115] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 [2024-12-16 22:12:49.741914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.993 [2024-12-16 22:12:49.784376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.993 [2024-12-16 22:12:49.785402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.993 [2024-12-16 22:12:49.791858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.993 [2024-12-16 22:12:49.792088] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:43.993 [2024-12-16 22:12:49.792098] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.993 [2024-12-16 22:12:49.805918] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.993 [2024-12-16 22:12:49.849243] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.993 [2024-12-16 22:12:49.850260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.993 [2024-12-16 22:12:49.855866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.993 [2024-12-16 22:12:49.856100] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:43.993 [2024-12-16 22:12:49.856105] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:43.993 [2024-12-16 22:12:50.055927] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:43.993 [2024-12-16 22:12:50.057363] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:43.993 [2024-12-16 22:12:50.057391] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:43.993 22:12:50 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:43.993 22:12:50 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:44.252 00:16:44.252 real 0m1.983s 00:16:44.252 user 0m0.798s 00:16:44.252 sys 0m0.139s 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.252 ************************************ 00:16:44.252 END TEST test_create_multi_ublk 00:16:44.252 ************************************ 00:16:44.252 22:12:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.252 22:12:50 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:44.252 22:12:50 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:44.252 22:12:50 ublk -- ublk/ublk.sh@130 -- # killprocess 86745 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@954 -- # '[' -z 86745 ']' 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@958 -- # kill -0 86745 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@959 -- # uname 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86745 00:16:44.252 killing process with pid 86745 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86745' 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@973 -- # kill 86745 00:16:44.252 22:12:50 ublk -- common/autotest_common.sh@978 -- # wait 86745 00:16:44.510 [2024-12-16 22:12:50.611414] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:44.510 [2024-12-16 22:12:50.611474] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:44.769 00:16:44.769 real 0m18.393s 00:16:44.769 user 0m27.987s 00:16:44.769 sys 0m8.097s 00:16:44.769 22:12:50 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.769 ************************************ 00:16:44.769 END TEST ublk 00:16:44.769 ************************************ 00:16:44.769 22:12:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:44.769 22:12:50 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:44.769 
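Stripped of the harness wrappers, the multi-ublk lifecycle exercised by TEST test_create_multi_ublk above reduces to the RPC sequence below. This is a minimal sketch, assuming a running spdk_tgt with the ublk kernel module loaded, the ublk target already created via ublk_create_target (done earlier in this run), and scripts/rpc.py on PATH; the 128 MiB / 4096-byte malloc sizing and the -q 4 -d 512 queue shape are the values logged above, while the loop structure itself is illustrative.

    # create one malloc bdev and one ublk device per id 0..3
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # exposes /dev/ublkb$i
    done
    scripts/rpc.py ublk_get_disks | jq -r '.[].ublk_device'          # expect /dev/ublkb0 .. /dev/ublkb3
    # teardown: stop each disk, destroy the target, then delete the backing bdevs
    for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk "$i"; done
    scripts/rpc.py -t 120 ublk_destroy_target
    for i in 0 1 2 3; do scripts/rpc.py bdev_malloc_delete "Malloc$i"; done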
22:12:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:44.769 22:12:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.769 22:12:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.769 ************************************ 00:16:44.769 START TEST ublk_recovery 00:16:44.769 ************************************ 00:16:44.769 22:12:50 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:44.769 * Looking for test storage... 00:16:44.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:44.769 22:12:50 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:44.769 22:12:50 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:44.769 22:12:50 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.769 22:12:51 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:44.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.769 --rc genhtml_branch_coverage=1 00:16:44.769 --rc genhtml_function_coverage=1 00:16:44.769 --rc genhtml_legend=1 00:16:44.769 --rc geninfo_all_blocks=1 00:16:44.769 --rc geninfo_unexecuted_blocks=1 00:16:44.769 00:16:44.769 ' 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:44.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.769 --rc genhtml_branch_coverage=1 00:16:44.769 --rc genhtml_function_coverage=1 00:16:44.769 --rc genhtml_legend=1 00:16:44.769 --rc geninfo_all_blocks=1 00:16:44.769 --rc geninfo_unexecuted_blocks=1 00:16:44.769 00:16:44.769 ' 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:44.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.769 --rc genhtml_branch_coverage=1 00:16:44.769 --rc genhtml_function_coverage=1 00:16:44.769 --rc genhtml_legend=1 00:16:44.769 --rc geninfo_all_blocks=1 00:16:44.769 --rc geninfo_unexecuted_blocks=1 00:16:44.769 00:16:44.769 ' 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:44.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.769 --rc genhtml_branch_coverage=1 00:16:44.769 --rc genhtml_function_coverage=1 00:16:44.769 --rc genhtml_legend=1 00:16:44.769 --rc geninfo_all_blocks=1 00:16:44.769 --rc geninfo_unexecuted_blocks=1 00:16:44.769 00:16:44.769 ' 00:16:44.769 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:44.769 22:12:51 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:44.769 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:44.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.769 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=87114 00:16:44.769 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:44.769 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 87114 00:16:44.769 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 87114 ']' 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.769 22:12:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.030 [2024-12-16 22:12:51.151184] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:45.030 [2024-12-16 22:12:51.151303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87114 ] 00:16:45.030 [2024-12-16 22:12:51.295655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:45.030 [2024-12-16 22:12:51.313661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.030 [2024-12-16 22:12:51.313700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.602 22:12:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.602 22:12:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:45.602 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:45.602 22:12:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.602 22:12:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.602 [2024-12-16 22:12:51.938852] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:45.602 [2024-12-16 22:12:51.939776] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:45.602 22:12:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.602 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:45.602 22:12:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.602 22:12:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.863 malloc0 00:16:45.863 22:12:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.863 22:12:51 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:45.863 22:12:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.864 22:12:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:45.864 [2024-12-16 22:12:51.970950] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:45.864 [2024-12-16 22:12:51.971054] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:45.864 [2024-12-16 22:12:51.971061] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:45.864 [2024-12-16 22:12:51.971068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:45.864 [2024-12-16 22:12:51.979933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:45.864 [2024-12-16 22:12:51.979954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:45.864 [2024-12-16 22:12:51.986864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:45.864 [2024-12-16 22:12:51.986970] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:45.864 [2024-12-16 22:12:52.001863] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:45.864 1 00:16:45.864 22:12:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.864 22:12:52 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:46.798 22:12:53 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87142 00:16:46.798 22:12:53 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:46.798 22:12:53 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:46.798 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:46.798 fio-3.35 00:16:46.798 Starting 1 process 00:16:52.065 22:12:58 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 87114 00:16:52.065 22:12:58 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:57.356 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 87114 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:57.356 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:57.356 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87256 00:16:57.356 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:57.356 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87256 00:16:57.356 22:13:03 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 87256 ']' 00:16:57.356 22:13:03 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.356 22:13:03 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.356 22:13:03 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.357 22:13:03 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.357 22:13:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.357 [2024-12-16 22:13:03.093227] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
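For context while the target is killed and recovered below: the I/O load pinned to cores 2-3 above is a plain 60-second random read/write job against the exported ublk device. The single logged command line, re-wrapped here for readability, is:

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
        --numjobs=1 --iodepth=128 --ioengine=libaio \
        --rw=randrw --direct=1 --time_based --runtime=60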
00:16:57.357 [2024-12-16 22:13:03.093471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87256 ] 00:16:57.357 [2024-12-16 22:13:03.251994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:57.357 [2024-12-16 22:13:03.273857] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.357 [2024-12-16 22:13:03.273909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.615 22:13:03 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.615 22:13:03 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:57.615 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:57.615 22:13:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.615 22:13:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.615 [2024-12-16 22:13:03.949857] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:57.615 [2024-12-16 22:13:03.950919] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:57.615 22:13:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.615 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:57.615 22:13:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.615 22:13:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.873 malloc0 00:16:57.873 22:13:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.873 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:57.873 22:13:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.873 22:13:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.873 [2024-12-16 22:13:03.982265] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:57.873 [2024-12-16 22:13:03.982301] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:57.873 [2024-12-16 22:13:03.982309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:57.873 [2024-12-16 22:13:03.989891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:57.873 [2024-12-16 22:13:03.989912] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:57.873 [2024-12-16 22:13:03.989924] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:57.873 [2024-12-16 22:13:03.990006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:57.873 1 00:16:57.873 22:13:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.873 22:13:03 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87142 00:16:57.873 [2024-12-16 22:13:03.997867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:57.873 [2024-12-16 22:13:04.001259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:57.873 [2024-12-16 22:13:04.005068] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:57.873 [2024-12-16 
22:13:04.005086] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:17:54.163
00:17:54.163 fio_test: (groupid=0, jobs=1): err= 0: pid=87146: Mon Dec 16 22:13:53 2024
00:17:54.163 read: IOPS=27.9k, BW=109MiB/s (114MB/s)(6532MiB/60002msec)
00:17:54.163 slat (nsec): min=912, max=878337, avg=4882.93, stdev=1966.15
00:17:54.163 clat (usec): min=983, max=5997.5k, avg=2253.81, stdev=37086.30
00:17:54.163 lat (usec): min=990, max=5997.5k, avg=2258.69, stdev=37086.30
00:17:54.163 clat percentiles (usec):
00:17:54.164 | 1.00th=[ 1713], 5.00th=[ 1827], 10.00th=[ 1844], 20.00th=[ 1876],
00:17:54.164 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1909], 60.00th=[ 1926],
00:17:54.164 | 70.00th=[ 1942], 80.00th=[ 1958], 90.00th=[ 2008], 95.00th=[ 2737],
00:17:54.164 | 99.00th=[ 4621], 99.50th=[ 5342], 99.90th=[ 6521], 99.95th=[ 7177],
00:17:54.164 | 99.99th=[13304]
00:17:54.164 bw ( KiB/s): min=20984, max=128536, per=100.00%, avg=122788.15, stdev=14099.60, samples=108
00:17:54.164 iops : min= 5246, max=32134, avg=30697.04, stdev=3524.90, samples=108
00:17:54.164 write: IOPS=27.8k, BW=109MiB/s (114MB/s)(6527MiB/60002msec); 0 zone resets
00:17:54.164 slat (nsec): min=940, max=796451, avg=4911.33, stdev=1656.45
00:17:54.164 clat (usec): min=1003, max=5997.4k, avg=2329.90, stdev=37101.19
00:17:54.164 lat (usec): min=1008, max=5997.4k, avg=2334.81, stdev=37101.19
00:17:54.164 clat percentiles (usec):
00:17:54.164 | 1.00th=[ 1745], 5.00th=[ 1909], 10.00th=[ 1942], 20.00th=[ 1958],
00:17:54.164 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2024],
00:17:54.164 | 70.00th=[ 2040], 80.00th=[ 2057], 90.00th=[ 2089], 95.00th=[ 2638],
00:17:54.164 | 99.00th=[ 4621], 99.50th=[ 5276], 99.90th=[ 6587], 99.95th=[ 7177],
00:17:54.164 | 99.99th=[13435]
00:17:54.164 bw ( KiB/s): min=21016, max=127640, per=100.00%, avg=122685.70, stdev=14160.37, samples=108
00:17:54.164 iops : min= 5254, max=31910, avg=30671.43, stdev=3540.09, samples=108
00:17:54.164 lat (usec) : 1000=0.01%
00:17:54.164 lat (msec) : 2=67.80%, 4=30.00%, 10=2.18%, 20=0.02%, >=2000=0.01%
00:17:54.164 cpu : usr=6.35%, sys=28.01%, ctx=110307, majf=0, minf=13
00:17:54.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:17:54.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.164 issued rwts: total=1672288,1670962,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.164 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.164
00:17:54.164 Run status group 0 (all jobs):
00:17:54.164 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=6532MiB (6850MB), run=60002-60002msec
00:17:54.164 WRITE: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=6527MiB (6844MB), run=60002-60002msec
00:17:54.164
00:17:54.164 Disk stats (read/write):
00:17:54.164 ublkb1: ios=1668939/1667613, merge=0/0, ticks=3674114/3662664, in_queue=7336779, util=99.89%
00:17:54.164 22:13:53 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:54.164 [2024-12-16 22:13:53.262592] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:17:54.164 [2024-12-16 22:13:53.308877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
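In RPC terms, the crash-and-recover sequence whose fio summary appears above comes down to the short sketch below, assembled from the commands logged earlier in this test; the backgrounding and pid bookkeeping are illustrative, and only the RPC calls themselves are taken from the log.

    kill -9 "$spdk_pid"                                    # hard-kill the target mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!        # relaunch it
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev
    scripts/rpc.py ublk_recover_disk malloc0 1             # re-attach /dev/ublkb1 in place
    wait "$fio_pid"                                        # fio rides through the outage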
00:17:54.164 [2024-12-16 22:13:53.309031] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:54.164 [2024-12-16 22:13:53.316868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:54.164 [2024-12-16 22:13:53.316960] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:54.164 [2024-12-16 22:13:53.316967] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.164 22:13:53 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.164 [2024-12-16 22:13:53.326010] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:54.164 [2024-12-16 22:13:53.332270] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:54.164 [2024-12-16 22:13:53.332302] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.164 22:13:53 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:54.164 22:13:53 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:54.164 22:13:53 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87256 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 87256 ']' 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 87256 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87256 00:17:54.164 killing process with pid 87256 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87256' 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@973 -- # kill 87256 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@978 -- # wait 87256 00:17:54.164 [2024-12-16 22:13:53.524171] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:54.164 [2024-12-16 22:13:53.524231] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:54.164 ************************************ 00:17:54.164 END TEST ublk_recovery 00:17:54.164 ************************************ 00:17:54.164 00:17:54.164 real 1m2.878s 00:17:54.164 user 1m41.415s 00:17:54.164 sys 0m34.388s 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.164 22:13:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.164 22:13:53 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:54.164 22:13:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:54.164 22:13:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.164 22:13:53 -- common/autotest_common.sh@10 -- # set +x 00:17:54.164 22:13:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
00:17:54.164 22:13:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:54.164 22:13:53 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:54.164 22:13:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:54.164 22:13:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.164 22:13:53 -- common/autotest_common.sh@10 -- # set +x 00:17:54.164 ************************************ 00:17:54.164 START TEST ftl 00:17:54.164 ************************************ 00:17:54.164 22:13:53 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:54.164 * Looking for test storage... 00:17:54.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.164 22:13:53 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.164 22:13:53 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.164 22:13:53 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:54.164 22:13:54 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.164 22:13:54 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.164 22:13:54 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.164 22:13:54 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.164 22:13:54 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.164 22:13:54 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.164 22:13:54 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.164 22:13:54 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.164 22:13:54 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:54.164 22:13:54 ftl -- scripts/common.sh@345 -- # : 1 00:17:54.164 22:13:54 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.164 22:13:54 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.164 22:13:54 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:54.164 22:13:54 ftl -- scripts/common.sh@353 -- # local d=1 00:17:54.164 22:13:54 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.164 22:13:54 ftl -- scripts/common.sh@355 -- # echo 1 00:17:54.164 22:13:54 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.164 22:13:54 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@353 -- # local d=2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.164 22:13:54 ftl -- scripts/common.sh@355 -- # echo 2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.164 22:13:54 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.164 22:13:54 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.164 22:13:54 ftl -- scripts/common.sh@368 -- # return 0 00:17:54.164 22:13:54 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.164 22:13:54 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.164 --rc genhtml_branch_coverage=1 00:17:54.164 --rc genhtml_function_coverage=1 00:17:54.164 --rc genhtml_legend=1 00:17:54.164 --rc geninfo_all_blocks=1 00:17:54.164 --rc geninfo_unexecuted_blocks=1 00:17:54.164 00:17:54.164 ' 00:17:54.164 22:13:54 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.165 --rc genhtml_branch_coverage=1 00:17:54.165 --rc genhtml_function_coverage=1 00:17:54.165 --rc genhtml_legend=1 00:17:54.165 --rc geninfo_all_blocks=1 00:17:54.165 --rc geninfo_unexecuted_blocks=1 00:17:54.165 00:17:54.165 ' 00:17:54.165 22:13:54 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:54.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.165 --rc genhtml_branch_coverage=1 00:17:54.165 --rc genhtml_function_coverage=1 00:17:54.165 --rc genhtml_legend=1 00:17:54.165 --rc geninfo_all_blocks=1 00:17:54.165 --rc geninfo_unexecuted_blocks=1 00:17:54.165 00:17:54.165 ' 00:17:54.165 22:13:54 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:54.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.165 --rc genhtml_branch_coverage=1 00:17:54.165 --rc genhtml_function_coverage=1 00:17:54.165 --rc genhtml_legend=1 00:17:54.165 --rc geninfo_all_blocks=1 00:17:54.165 --rc geninfo_unexecuted_blocks=1 00:17:54.165 00:17:54.165 ' 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:54.165 22:13:54 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:54.165 22:13:54 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.165 22:13:54 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.165 22:13:54 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:54.165 22:13:54 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:54.165 22:13:54 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.165 22:13:54 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:54.165 22:13:54 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:54.165 22:13:54 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.165 22:13:54 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.165 22:13:54 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:54.165 22:13:54 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:54.165 22:13:54 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.165 22:13:54 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.165 22:13:54 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:54.165 22:13:54 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:54.165 22:13:54 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.165 22:13:54 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.165 22:13:54 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:54.165 22:13:54 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:54.165 22:13:54 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.165 22:13:54 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.165 22:13:54 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.165 22:13:54 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.165 22:13:54 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:54.165 22:13:54 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:54.165 22:13:54 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.165 22:13:54 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:54.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:54.165 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.165 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.165 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.165 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=88047 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@38 -- # waitforlisten 88047 00:17:54.165 22:13:54 ftl -- common/autotest_common.sh@835 -- # '[' -z 88047 ']' 00:17:54.165 22:13:54 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.165 22:13:54 ftl -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.165 22:13:54 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.165 22:13:54 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.165 22:13:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:54.165 22:13:54 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:54.165 [2024-12-16 22:13:54.567997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:54.165 [2024-12-16 22:13:54.568305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88047 ] 00:17:54.165 [2024-12-16 22:13:54.727632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.165 [2024-12-16 22:13:54.754754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.165 22:13:55 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.165 22:13:55 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:54.165 22:13:55 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:54.165 22:13:55 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:54.165 22:13:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:54.165 22:13:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@50 -- # break 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@63 -- # break 00:17:54.165 22:13:56 ftl -- ftl/ftl.sh@66 -- # killprocess 88047 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@954 -- # '[' -z 88047 ']' 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@958 -- # kill -0 88047 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@959 -- # uname 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.165 22:13:56 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88047 00:17:54.165 killing process with pid 88047 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88047' 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@973 -- # kill 88047 00:17:54.165 22:13:56 ftl -- common/autotest_common.sh@978 -- # wait 88047 00:17:54.165 22:13:57 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:54.165 22:13:57 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:54.165 22:13:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:54.165 22:13:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.165 22:13:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:54.165 ************************************ 00:17:54.165 START TEST ftl_fio_basic 00:17:54.165 ************************************ 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:54.165 * Looking for test storage... 00:17:54.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:54.165 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.166 --rc genhtml_branch_coverage=1 00:17:54.166 --rc genhtml_function_coverage=1 00:17:54.166 --rc genhtml_legend=1 00:17:54.166 --rc geninfo_all_blocks=1 00:17:54.166 --rc geninfo_unexecuted_blocks=1 00:17:54.166 00:17:54.166 ' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.166 --rc genhtml_branch_coverage=1 00:17:54.166 --rc genhtml_function_coverage=1 00:17:54.166 --rc genhtml_legend=1 00:17:54.166 --rc geninfo_all_blocks=1 00:17:54.166 --rc geninfo_unexecuted_blocks=1 00:17:54.166 00:17:54.166 ' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.166 --rc genhtml_branch_coverage=1 00:17:54.166 --rc genhtml_function_coverage=1 00:17:54.166 --rc genhtml_legend=1 00:17:54.166 --rc geninfo_all_blocks=1 00:17:54.166 --rc geninfo_unexecuted_blocks=1 00:17:54.166 00:17:54.166 ' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:54.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.166 --rc genhtml_branch_coverage=1 00:17:54.166 --rc genhtml_function_coverage=1 00:17:54.166 --rc genhtml_legend=1 00:17:54.166 --rc geninfo_all_blocks=1 00:17:54.166 --rc geninfo_unexecuted_blocks=1 00:17:54.166 00:17:54.166 ' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88168 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88168 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 88168 ']' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.166 22:13:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:54.166 [2024-12-16 22:13:57.303554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:54.166 [2024-12-16 22:13:57.304218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88168 ] 00:17:54.166 [2024-12-16 22:13:57.459225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.166 [2024-12-16 22:13:57.478668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.166 [2024-12-16 22:13:57.478912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.166 [2024-12-16 22:13:57.478931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:54.166 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.166 { 00:17:54.166 "name": "nvme0n1", 00:17:54.166 "aliases": [ 00:17:54.166 "8dd65ce5-a7ce-45a4-bbba-c4b5d486a2cd" 00:17:54.166 ], 00:17:54.166 "product_name": "NVMe disk", 00:17:54.166 "block_size": 4096, 00:17:54.166 "num_blocks": 1310720, 00:17:54.166 "uuid": "8dd65ce5-a7ce-45a4-bbba-c4b5d486a2cd", 00:17:54.166 "numa_id": -1, 00:17:54.166 "assigned_rate_limits": { 00:17:54.166 "rw_ios_per_sec": 0, 00:17:54.166 "rw_mbytes_per_sec": 0, 00:17:54.166 "r_mbytes_per_sec": 0, 00:17:54.166 "w_mbytes_per_sec": 0 00:17:54.166 }, 00:17:54.166 "claimed": false, 00:17:54.166 "zoned": false, 00:17:54.166 "supported_io_types": { 00:17:54.166 "read": true, 00:17:54.166 "write": true, 00:17:54.166 "unmap": true, 00:17:54.166 "flush": true, 00:17:54.166 "reset": true, 00:17:54.166 "nvme_admin": true, 00:17:54.167 "nvme_io": true, 00:17:54.167 "nvme_io_md": false, 00:17:54.167 "write_zeroes": true, 00:17:54.167 "zcopy": false, 00:17:54.167 "get_zone_info": false, 00:17:54.167 "zone_management": false, 00:17:54.167 "zone_append": false, 00:17:54.167 "compare": true, 00:17:54.167 "compare_and_write": false, 00:17:54.167 "abort": true, 00:17:54.167 
"seek_hole": false, 00:17:54.167 "seek_data": false, 00:17:54.167 "copy": true, 00:17:54.167 "nvme_iov_md": false 00:17:54.167 }, 00:17:54.167 "driver_specific": { 00:17:54.167 "nvme": [ 00:17:54.167 { 00:17:54.167 "pci_address": "0000:00:11.0", 00:17:54.167 "trid": { 00:17:54.167 "trtype": "PCIe", 00:17:54.167 "traddr": "0000:00:11.0" 00:17:54.167 }, 00:17:54.167 "ctrlr_data": { 00:17:54.167 "cntlid": 0, 00:17:54.167 "vendor_id": "0x1b36", 00:17:54.167 "model_number": "QEMU NVMe Ctrl", 00:17:54.167 "serial_number": "12341", 00:17:54.167 "firmware_revision": "8.0.0", 00:17:54.167 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:54.167 "oacs": { 00:17:54.167 "security": 0, 00:17:54.167 "format": 1, 00:17:54.167 "firmware": 0, 00:17:54.167 "ns_manage": 1 00:17:54.167 }, 00:17:54.167 "multi_ctrlr": false, 00:17:54.167 "ana_reporting": false 00:17:54.167 }, 00:17:54.167 "vs": { 00:17:54.167 "nvme_version": "1.4" 00:17:54.167 }, 00:17:54.167 "ns_data": { 00:17:54.167 "id": 1, 00:17:54.167 "can_share": false 00:17:54.167 } 00:17:54.167 } 00:17:54.167 ], 00:17:54.167 "mp_policy": "active_passive" 00:17:54.167 } 00:17:54.167 } 00:17:54.167 ]' 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:54.167 22:13:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=7eea98e2-bc59-433f-896e-ef12518279d4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7eea98e2-bc59-433f-896e-ef12518279d4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=ceab4d3b-8585-4b2c-b265-de78306bb5a4 
00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.167 { 00:17:54.167 "name": "ceab4d3b-8585-4b2c-b265-de78306bb5a4", 00:17:54.167 "aliases": [ 00:17:54.167 "lvs/nvme0n1p0" 00:17:54.167 ], 00:17:54.167 "product_name": "Logical Volume", 00:17:54.167 "block_size": 4096, 00:17:54.167 "num_blocks": 26476544, 00:17:54.167 "uuid": "ceab4d3b-8585-4b2c-b265-de78306bb5a4", 00:17:54.167 "assigned_rate_limits": { 00:17:54.167 "rw_ios_per_sec": 0, 00:17:54.167 "rw_mbytes_per_sec": 0, 00:17:54.167 "r_mbytes_per_sec": 0, 00:17:54.167 "w_mbytes_per_sec": 0 00:17:54.167 }, 00:17:54.167 "claimed": false, 00:17:54.167 "zoned": false, 00:17:54.167 "supported_io_types": { 00:17:54.167 "read": true, 00:17:54.167 "write": true, 00:17:54.167 "unmap": true, 00:17:54.167 "flush": false, 00:17:54.167 "reset": true, 00:17:54.167 "nvme_admin": false, 00:17:54.167 "nvme_io": false, 00:17:54.167 "nvme_io_md": false, 00:17:54.167 "write_zeroes": true, 00:17:54.167 "zcopy": false, 00:17:54.167 "get_zone_info": false, 00:17:54.167 "zone_management": false, 00:17:54.167 "zone_append": false, 00:17:54.167 "compare": false, 00:17:54.167 "compare_and_write": false, 00:17:54.167 "abort": false, 00:17:54.167 "seek_hole": true, 00:17:54.167 "seek_data": true, 00:17:54.167 "copy": false, 00:17:54.167 "nvme_iov_md": false 00:17:54.167 }, 00:17:54.167 "driver_specific": { 00:17:54.167 "lvol": { 00:17:54.167 "lvol_store_uuid": "7eea98e2-bc59-433f-896e-ef12518279d4", 00:17:54.167 "base_bdev": "nvme0n1", 00:17:54.167 "thin_provision": true, 00:17:54.167 "num_allocated_clusters": 0, 00:17:54.167 "snapshot": false, 00:17:54.167 "clone": false, 00:17:54.167 "esnap_clone": false 00:17:54.167 } 00:17:54.167 } 00:17:54.167 } 00:17:54.167 ]' 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.167 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.167 { 00:17:54.167 "name": "ceab4d3b-8585-4b2c-b265-de78306bb5a4", 00:17:54.167 "aliases": [ 00:17:54.167 "lvs/nvme0n1p0" 00:17:54.167 ], 00:17:54.167 "product_name": "Logical Volume", 00:17:54.167 "block_size": 4096, 00:17:54.167 "num_blocks": 26476544, 00:17:54.167 "uuid": "ceab4d3b-8585-4b2c-b265-de78306bb5a4", 00:17:54.167 "assigned_rate_limits": { 00:17:54.167 "rw_ios_per_sec": 0, 00:17:54.167 "rw_mbytes_per_sec": 0, 00:17:54.167 "r_mbytes_per_sec": 0, 00:17:54.167 "w_mbytes_per_sec": 0 00:17:54.167 }, 00:17:54.167 "claimed": false, 00:17:54.167 "zoned": false, 00:17:54.167 "supported_io_types": { 00:17:54.167 "read": true, 00:17:54.167 "write": true, 00:17:54.167 "unmap": true, 00:17:54.167 "flush": false, 00:17:54.167 "reset": true, 00:17:54.167 "nvme_admin": false, 00:17:54.167 "nvme_io": false, 00:17:54.167 "nvme_io_md": false, 00:17:54.167 "write_zeroes": true, 00:17:54.167 "zcopy": false, 00:17:54.167 "get_zone_info": false, 00:17:54.167 "zone_management": false, 00:17:54.167 "zone_append": false, 00:17:54.167 "compare": false, 00:17:54.167 "compare_and_write": false, 00:17:54.167 "abort": false, 00:17:54.167 "seek_hole": true, 00:17:54.167 "seek_data": true, 00:17:54.167 "copy": false, 00:17:54.167 "nvme_iov_md": false 00:17:54.167 }, 00:17:54.167 "driver_specific": { 00:17:54.167 "lvol": { 00:17:54.167 "lvol_store_uuid": "7eea98e2-bc59-433f-896e-ef12518279d4", 00:17:54.167 "base_bdev": "nvme0n1", 00:17:54.167 "thin_provision": true, 00:17:54.168 "num_allocated_clusters": 0, 00:17:54.168 "snapshot": false, 00:17:54.168 "clone": false, 00:17:54.168 "esnap_clone": false 00:17:54.168 } 00:17:54.168 } 00:17:54.168 } 00:17:54.168 ]' 00:17:54.168 22:13:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:54.168 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ceab4d3b-8585-4b2c-b265-de78306bb5a4 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.168 { 00:17:54.168 "name": "ceab4d3b-8585-4b2c-b265-de78306bb5a4", 00:17:54.168 "aliases": [ 00:17:54.168 "lvs/nvme0n1p0" 00:17:54.168 ], 00:17:54.168 "product_name": "Logical Volume", 00:17:54.168 "block_size": 4096, 00:17:54.168 "num_blocks": 26476544, 00:17:54.168 "uuid": "ceab4d3b-8585-4b2c-b265-de78306bb5a4", 00:17:54.168 "assigned_rate_limits": { 00:17:54.168 "rw_ios_per_sec": 0, 00:17:54.168 "rw_mbytes_per_sec": 0, 00:17:54.168 "r_mbytes_per_sec": 0, 00:17:54.168 "w_mbytes_per_sec": 0 00:17:54.168 }, 00:17:54.168 "claimed": false, 00:17:54.168 "zoned": false, 00:17:54.168 "supported_io_types": { 00:17:54.168 "read": true, 00:17:54.168 "write": true, 00:17:54.168 "unmap": true, 00:17:54.168 "flush": false, 00:17:54.168 "reset": true, 00:17:54.168 "nvme_admin": false, 00:17:54.168 "nvme_io": false, 00:17:54.168 "nvme_io_md": false, 00:17:54.168 "write_zeroes": true, 00:17:54.168 "zcopy": false, 00:17:54.168 "get_zone_info": false, 00:17:54.168 "zone_management": false, 00:17:54.168 "zone_append": false, 00:17:54.168 "compare": false, 00:17:54.168 "compare_and_write": false, 00:17:54.168 "abort": false, 00:17:54.168 "seek_hole": true, 00:17:54.168 "seek_data": true, 00:17:54.168 "copy": false, 00:17:54.168 "nvme_iov_md": false 00:17:54.168 }, 00:17:54.168 "driver_specific": { 00:17:54.168 "lvol": { 00:17:54.168 "lvol_store_uuid": "7eea98e2-bc59-433f-896e-ef12518279d4", 00:17:54.168 "base_bdev": "nvme0n1", 00:17:54.168 "thin_provision": true, 00:17:54.168 "num_allocated_clusters": 0, 00:17:54.168 "snapshot": false, 00:17:54.168 "clone": false, 00:17:54.168 "esnap_clone": false 00:17:54.168 } 00:17:54.168 } 00:17:54.168 } 00:17:54.168 ]' 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.168 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.427 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.427 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.427 22:14:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.427 22:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:54.427 22:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:54.427 22:14:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ceab4d3b-8585-4b2c-b265-de78306bb5a4 -c nvc0n1p0 --l2p_dram_limit 60 00:17:54.428 [2024-12-16 22:14:00.698736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.698776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.428 [2024-12-16 22:14:00.698787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.428 
[2024-12-16 22:14:00.698795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.698870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.698879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.428 [2024-12-16 22:14:00.698887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:54.428 [2024-12-16 22:14:00.698895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.698933] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.428 [2024-12-16 22:14:00.699161] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.428 [2024-12-16 22:14:00.699172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.699188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.428 [2024-12-16 22:14:00.699201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:54.428 [2024-12-16 22:14:00.699208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.699235] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7e147237-5d91-4b9c-b270-490efa126547 00:17:54.428 [2024-12-16 22:14:00.700262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.700371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:54.428 [2024-12-16 22:14:00.700385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:54.428 [2024-12-16 22:14:00.700392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.705701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.705734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.428 [2024-12-16 22:14:00.705743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.209 ms 00:17:54.428 [2024-12-16 22:14:00.705752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.705853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.705861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.428 [2024-12-16 22:14:00.705870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:54.428 [2024-12-16 22:14:00.705875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.705912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.705919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.428 [2024-12-16 22:14:00.705926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.428 [2024-12-16 22:14:00.705932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.705961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.428 [2024-12-16 22:14:00.707263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 
22:14:00.707359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.428 [2024-12-16 22:14:00.707371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:17:54.428 [2024-12-16 22:14:00.707379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.707416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.707440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.428 [2024-12-16 22:14:00.707447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.428 [2024-12-16 22:14:00.707456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.707477] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:54.428 [2024-12-16 22:14:00.707601] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.428 [2024-12-16 22:14:00.707610] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.428 [2024-12-16 22:14:00.707620] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.428 [2024-12-16 22:14:00.707628] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.428 [2024-12-16 22:14:00.707637] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.428 [2024-12-16 22:14:00.707643] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.428 [2024-12-16 22:14:00.707650] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.428 [2024-12-16 22:14:00.707656] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.428 [2024-12-16 22:14:00.707663] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.428 [2024-12-16 22:14:00.707669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.707676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.428 [2024-12-16 22:14:00.707682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:54.428 [2024-12-16 22:14:00.707689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.707767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.428 [2024-12-16 22:14:00.707776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.428 [2024-12-16 22:14:00.707791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:54.428 [2024-12-16 22:14:00.707797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.428 [2024-12-16 22:14:00.707901] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.428 [2024-12-16 22:14:00.707911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.428 [2024-12-16 22:14:00.707917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.428 [2024-12-16 22:14:00.707926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.428 [2024-12-16 22:14:00.707932] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:54.428 [2024-12-16 22:14:00.707939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.428 [2024-12-16 22:14:00.707944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.428 [2024-12-16 22:14:00.707951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.428 [2024-12-16 22:14:00.707957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:54.428 [2024-12-16 22:14:00.707964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.428 [2024-12-16 22:14:00.707970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.428 [2024-12-16 22:14:00.707978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.428 [2024-12-16 22:14:00.707984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.428 [2024-12-16 22:14:00.707993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.428 [2024-12-16 22:14:00.708009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:54.428 [2024-12-16 22:14:00.708016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.428 [2024-12-16 22:14:00.708022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.428 [2024-12-16 22:14:00.708029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:54.428 [2024-12-16 22:14:00.708035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.428 [2024-12-16 22:14:00.708042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.428 [2024-12-16 22:14:00.708048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.428 [2024-12-16 22:14:00.708055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.428 [2024-12-16 22:14:00.708061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.428 [2024-12-16 22:14:00.708068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.428 [2024-12-16 22:14:00.708073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.428 [2024-12-16 22:14:00.708080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.428 [2024-12-16 22:14:00.708086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.428 [2024-12-16 22:14:00.708093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.428 [2024-12-16 22:14:00.708099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.428 [2024-12-16 22:14:00.708113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:54.428 [2024-12-16 22:14:00.708118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.428 [2024-12-16 22:14:00.708126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.428 [2024-12-16 22:14:00.708132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:54.428 [2024-12-16 22:14:00.708139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.428 [2024-12-16 22:14:00.708145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.428 [2024-12-16 22:14:00.708153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:54.428 [2024-12-16 22:14:00.708158] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.429 [2024-12-16 22:14:00.708166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.429 [2024-12-16 22:14:00.708172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:54.429 [2024-12-16 22:14:00.708179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.429 [2024-12-16 22:14:00.708185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.429 [2024-12-16 22:14:00.708191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:54.429 [2024-12-16 22:14:00.708198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.429 [2024-12-16 22:14:00.708204] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.429 [2024-12-16 22:14:00.708211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.429 [2024-12-16 22:14:00.708220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.429 [2024-12-16 22:14:00.708228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.429 [2024-12-16 22:14:00.708236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.429 [2024-12-16 22:14:00.708242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.429 [2024-12-16 22:14:00.708249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.429 [2024-12-16 22:14:00.708255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.429 [2024-12-16 22:14:00.708262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.429 [2024-12-16 22:14:00.708268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.429 [2024-12-16 22:14:00.708278] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.429 [2024-12-16 22:14:00.708286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.429 [2024-12-16 22:14:00.708295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.429 [2024-12-16 22:14:00.708301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:54.429 [2024-12-16 22:14:00.708310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:54.429 [2024-12-16 22:14:00.708317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:54.429 [2024-12-16 22:14:00.708324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:54.429 [2024-12-16 22:14:00.708331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:54.429 [2024-12-16 22:14:00.708341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:54.429 [2024-12-16 22:14:00.708346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:54.429 [2024-12-16 22:14:00.708353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:54.429 [2024-12-16 22:14:00.708359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:54.429 [2024-12-16 22:14:00.708366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:54.429 [2024-12-16 22:14:00.708372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:54.429 [2024-12-16 22:14:00.708378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:54.429 [2024-12-16 22:14:00.708384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:54.429 [2024-12-16 22:14:00.708391] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.429 [2024-12-16 22:14:00.708397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.429 [2024-12-16 22:14:00.708412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.429 [2024-12-16 22:14:00.708418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.429 [2024-12-16 22:14:00.708424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.429 [2024-12-16 22:14:00.708430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.429 [2024-12-16 22:14:00.708437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.429 [2024-12-16 22:14:00.708442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.429 [2024-12-16 22:14:00.708459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:17:54.429 [2024-12-16 22:14:00.708464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.429 [2024-12-16 22:14:00.708517] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:54.429 [2024-12-16 22:14:00.708525] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:56.330 [2024-12-16 22:14:02.525377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.330 [2024-12-16 22:14:02.525437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:56.330 [2024-12-16 22:14:02.525454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1816.847 ms 00:17:56.330 [2024-12-16 22:14:02.525463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.330 [2024-12-16 22:14:02.533621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.330 [2024-12-16 22:14:02.533791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.330 [2024-12-16 22:14:02.533854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.059 ms 00:17:56.330 [2024-12-16 22:14:02.533865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.330 [2024-12-16 22:14:02.534001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.330 [2024-12-16 22:14:02.534030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.330 [2024-12-16 22:14:02.534040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:56.330 [2024-12-16 22:14:02.534048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.330 [2024-12-16 22:14:02.551459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.330 [2024-12-16 22:14:02.551506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.330 [2024-12-16 22:14:02.551523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.346 ms 00:17:56.330 [2024-12-16 22:14:02.551532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.330 [2024-12-16 22:14:02.551579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.330 [2024-12-16 22:14:02.551589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.330 [2024-12-16 22:14:02.551601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.330 [2024-12-16 22:14:02.551610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.330 [2024-12-16 22:14:02.552013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.552030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.331 [2024-12-16 22:14:02.552043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:17:56.331 [2024-12-16 22:14:02.552054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.552236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.552265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.331 [2024-12-16 22:14:02.552283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:17:56.331 [2024-12-16 22:14:02.552297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.558192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.558337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.331 [2024-12-16 
22:14:02.558362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.838 ms 00:17:56.331 [2024-12-16 22:14:02.558376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.566749] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:56.331 [2024-12-16 22:14:02.580804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.580856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.331 [2024-12-16 22:14:02.580866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.321 ms 00:17:56.331 [2024-12-16 22:14:02.580875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.618242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.618285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:56.331 [2024-12-16 22:14:02.618296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.329 ms 00:17:56.331 [2024-12-16 22:14:02.618308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.618501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.618516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.331 [2024-12-16 22:14:02.618525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:56.331 [2024-12-16 22:14:02.618534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.621702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.621740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:56.331 [2024-12-16 22:14:02.621749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:17:56.331 [2024-12-16 22:14:02.621759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.624566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.624707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:56.331 [2024-12-16 22:14:02.624728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:17:56.331 [2024-12-16 22:14:02.624741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.625129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.625149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.331 [2024-12-16 22:14:02.625158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:56.331 [2024-12-16 22:14:02.625168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.648214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.648271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:56.331 [2024-12-16 22:14:02.648287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.018 ms 00:17:56.331 [2024-12-16 22:14:02.648299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.652447] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.652494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:56.331 [2024-12-16 22:14:02.652508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.073 ms 00:17:56.331 [2024-12-16 22:14:02.652521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.655380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.655510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:56.331 [2024-12-16 22:14:02.655522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:17:56.331 [2024-12-16 22:14:02.655530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.658298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.658330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.331 [2024-12-16 22:14:02.658338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.733 ms 00:17:56.331 [2024-12-16 22:14:02.658347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.658388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.658397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.331 [2024-12-16 22:14:02.658404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:56.331 [2024-12-16 22:14:02.658411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.658464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.331 [2024-12-16 22:14:02.658475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.331 [2024-12-16 22:14:02.658481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:56.331 [2024-12-16 22:14:02.658487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.331 [2024-12-16 22:14:02.659607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 1960.556 ms, result 0 00:17:56.331 { 00:17:56.331 "name": "ftl0", 00:17:56.331 "uuid": "7e147237-5d91-4b9c-b270-490efa126547" 00:17:56.331 } 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:56.590 22:14:02 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:56.848 [ 00:17:56.848 { 00:17:56.848 "name": "ftl0", 00:17:56.848 "aliases": [ 00:17:56.848 "7e147237-5d91-4b9c-b270-490efa126547" 00:17:56.848 ], 00:17:56.848 "product_name": "FTL disk", 00:17:56.848 
"block_size": 4096, 00:17:56.848 "num_blocks": 20971520, 00:17:56.848 "uuid": "7e147237-5d91-4b9c-b270-490efa126547", 00:17:56.848 "assigned_rate_limits": { 00:17:56.848 "rw_ios_per_sec": 0, 00:17:56.848 "rw_mbytes_per_sec": 0, 00:17:56.848 "r_mbytes_per_sec": 0, 00:17:56.848 "w_mbytes_per_sec": 0 00:17:56.848 }, 00:17:56.848 "claimed": false, 00:17:56.848 "zoned": false, 00:17:56.848 "supported_io_types": { 00:17:56.848 "read": true, 00:17:56.848 "write": true, 00:17:56.848 "unmap": true, 00:17:56.848 "flush": true, 00:17:56.848 "reset": false, 00:17:56.848 "nvme_admin": false, 00:17:56.848 "nvme_io": false, 00:17:56.848 "nvme_io_md": false, 00:17:56.848 "write_zeroes": true, 00:17:56.848 "zcopy": false, 00:17:56.848 "get_zone_info": false, 00:17:56.848 "zone_management": false, 00:17:56.848 "zone_append": false, 00:17:56.848 "compare": false, 00:17:56.848 "compare_and_write": false, 00:17:56.848 "abort": false, 00:17:56.848 "seek_hole": false, 00:17:56.848 "seek_data": false, 00:17:56.848 "copy": false, 00:17:56.848 "nvme_iov_md": false 00:17:56.848 }, 00:17:56.848 "driver_specific": { 00:17:56.848 "ftl": { 00:17:56.848 "base_bdev": "ceab4d3b-8585-4b2c-b265-de78306bb5a4", 00:17:56.848 "cache": "nvc0n1p0" 00:17:56.848 } 00:17:56.848 } 00:17:56.848 } 00:17:56.848 ] 00:17:56.848 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:56.848 22:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:56.848 22:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:57.106 22:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:57.106 22:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:57.106 [2024-12-16 22:14:03.441062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.441096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.107 [2024-12-16 22:14:03.441108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.107 [2024-12-16 22:14:03.441115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.107 [2024-12-16 22:14:03.441143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.107 [2024-12-16 22:14:03.441582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.441606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.107 [2024-12-16 22:14:03.441615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:17:57.107 [2024-12-16 22:14:03.441633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.107 [2024-12-16 22:14:03.442048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.442059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.107 [2024-12-16 22:14:03.442066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:17:57.107 [2024-12-16 22:14:03.442074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.107 [2024-12-16 22:14:03.444488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.444503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.107 [2024-12-16 
22:14:03.444511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:17:57.107 [2024-12-16 22:14:03.444519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.107 [2024-12-16 22:14:03.449289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.449378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.107 [2024-12-16 22:14:03.449421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:17:57.107 [2024-12-16 22:14:03.449440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.107 [2024-12-16 22:14:03.450556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.450654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.107 [2024-12-16 22:14:03.450701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:17:57.107 [2024-12-16 22:14:03.450720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.107 [2024-12-16 22:14:03.453832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.453940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.107 [2024-12-16 22:14:03.453987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.072 ms 00:17:57.107 [2024-12-16 22:14:03.454007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.107 [2024-12-16 22:14:03.454195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.107 [2024-12-16 22:14:03.454255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.107 [2024-12-16 22:14:03.454296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:57.107 [2024-12-16 22:14:03.454316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.366 [2024-12-16 22:14:03.455599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.366 [2024-12-16 22:14:03.455693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.366 [2024-12-16 22:14:03.455735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:17:57.366 [2024-12-16 22:14:03.455754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.366 [2024-12-16 22:14:03.456670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.366 [2024-12-16 22:14:03.456756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.367 [2024-12-16 22:14:03.456797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:17:57.367 [2024-12-16 22:14:03.456815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.367 [2024-12-16 22:14:03.457615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.367 [2024-12-16 22:14:03.457708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.367 [2024-12-16 22:14:03.457753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:17:57.367 [2024-12-16 22:14:03.457772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.367 [2024-12-16 22:14:03.458585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.367 [2024-12-16 22:14:03.458672] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.367 [2024-12-16 22:14:03.458717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:17:57.367 [2024-12-16 22:14:03.458736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.367 [2024-12-16 22:14:03.458773] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.367 [2024-12-16 22:14:03.458892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.458934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.458959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.458981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 
22:14:03.459415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:57.367 [2024-12-16 22:14:03.459575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.367 [2024-12-16 22:14:03.459767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.368 [2024-12-16 22:14:03.459953] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.368 [2024-12-16 22:14:03.459961] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7e147237-5d91-4b9c-b270-490efa126547 00:17:57.368 [2024-12-16 22:14:03.459969] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.368 [2024-12-16 22:14:03.459974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.368 [2024-12-16 22:14:03.459981] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.368 [2024-12-16 22:14:03.459986] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.368 [2024-12-16 22:14:03.459993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.368 [2024-12-16 22:14:03.459999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.368 [2024-12-16 22:14:03.460005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.368 [2024-12-16 22:14:03.460010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.368 [2024-12-16 22:14:03.460016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.368 [2024-12-16 22:14:03.460022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.368 [2024-12-16 22:14:03.460029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.368 [2024-12-16 22:14:03.460036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:17:57.368 [2024-12-16 22:14:03.460043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.461676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.368 [2024-12-16 22:14:03.461763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.368 [2024-12-16 22:14:03.461808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:17:57.368 [2024-12-16 22:14:03.461827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.461932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.368 [2024-12-16 22:14:03.461958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.368 [2024-12-16 22:14:03.462008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:57.368 [2024-12-16 22:14:03.462029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.466961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.467051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.368 [2024-12-16 22:14:03.467107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.467126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 
[2024-12-16 22:14:03.467186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.467238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.368 [2024-12-16 22:14:03.467256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.467274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.467351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.467425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.368 [2024-12-16 22:14:03.467443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.467459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.467489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.467510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.368 [2024-12-16 22:14:03.467525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.467611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.476409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.476520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.368 [2024-12-16 22:14:03.476574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.476594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.483805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.483920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.368 [2024-12-16 22:14:03.483963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.483985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.484041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.484062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.368 [2024-12-16 22:14:03.484115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.484143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.484235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.484297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.368 [2024-12-16 22:14:03.484313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.484380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.484468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.484536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.368 [2024-12-16 22:14:03.484558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.484607] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.484664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.484687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.368 [2024-12-16 22:14:03.484772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.484792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.484849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.484876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.368 [2024-12-16 22:14:03.484892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.484908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.485016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.368 [2024-12-16 22:14:03.485043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.368 [2024-12-16 22:14:03.485059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.368 [2024-12-16 22:14:03.485075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.368 [2024-12-16 22:14:03.485224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.133 ms, result 0 00:17:57.368 true 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88168 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 88168 ']' 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 88168 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88168 00:17:57.368 killing process with pid 88168 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88168' 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 88168 00:17:57.368 22:14:03 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 88168 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:02.653 22:14:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:02.653 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:02.653 fio-3.35 00:18:02.653 Starting 1 thread 00:18:07.923 00:18:07.923 test: (groupid=0, jobs=1): err= 0: pid=88322: Mon Dec 16 22:14:13 2024 00:18:07.923 read: IOPS=869, BW=57.7MiB/s (60.5MB/s)(255MiB/4410msec) 00:18:07.923 slat (nsec): min=3985, max=20121, avg=5440.29, stdev=1757.83 00:18:07.923 clat (usec): min=298, max=1205, avg=522.27, stdev=150.44 00:18:07.923 lat (usec): min=303, max=1211, avg=527.71, stdev=150.59 00:18:07.923 clat percentiles (usec): 00:18:07.923 | 1.00th=[ 306], 5.00th=[ 338], 10.00th=[ 351], 20.00th=[ 412], 00:18:07.923 | 30.00th=[ 420], 40.00th=[ 478], 50.00th=[ 486], 60.00th=[ 545], 00:18:07.923 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 816], 95.00th=[ 840], 00:18:07.923 | 99.00th=[ 930], 99.50th=[ 1004], 99.90th=[ 1123], 99.95th=[ 1188], 00:18:07.923 | 99.99th=[ 1205] 00:18:07.923 write: IOPS=875, BW=58.1MiB/s (61.0MB/s)(256MiB/4405msec); 0 zone resets 00:18:07.923 slat (nsec): min=14465, max=82140, avg=19305.56, stdev=3338.11 00:18:07.923 clat (usec): min=310, max=1528, avg=588.95, stdev=158.40 00:18:07.923 lat (usec): min=331, max=1547, avg=608.26, stdev=158.28 00:18:07.923 clat percentiles (usec): 00:18:07.923 | 1.00th=[ 322], 5.00th=[ 363], 10.00th=[ 383], 20.00th=[ 498], 00:18:07.923 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 578], 00:18:07.923 | 70.00th=[ 594], 80.00th=[ 660], 90.00th=[ 857], 95.00th=[ 922], 00:18:07.923 | 99.00th=[ 996], 99.50th=[ 1074], 99.90th=[ 1254], 99.95th=[ 1418], 00:18:07.923 | 99.99th=[ 1532] 00:18:07.923 bw ( KiB/s): min=46784, max=67320, per=98.76%, avg=58786.00, stdev=7251.95, samples=8 00:18:07.923 iops : min= 688, max= 990, avg=864.50, stdev=106.65, samples=8 00:18:07.923 lat (usec) : 500=38.77%, 750=46.92%, 1000=13.55% 
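(Note on the launch just above: LD_PRELOAD stacks libasan ahead of SPDK's fio bdev plugin, build/fio/spdk_bdev, so ASan stays resident while fio drives the bdev through the spdk_bdev ioengine. The job file itself is never echoed into this log; the sketch below only mirrors what the fio-3.35 banner reports for the "test" job, namely randwrite, 68.0KiB blocks, iodepth 1 on the spdk_bdev engine. The spdk_json_conf, filename, and verify lines are assumptions, not taken from this run, and this is not the shipped test/ftl/config/fio/randw-verify.fio.)

    # sketch, assuming typical spdk_bdev-plugin job options; see hedges above
    cat > randw-verify.fio <<'EOF'
    [test]
    ; assumptions: none of the next three lines are visible in this log
    ioengine=spdk_bdev
    spdk_json_conf=ftl.json
    filename=ftl0
    ; from the fio banner above: randwrite, 68.0KiB blocks, iodepth 1
    rw=randwrite
    bs=68k
    iodepth=1
    ; assumption: some verify mode, implied by the test name
    verify=crc32c
    EOF
    LD_PRELOAD='/usr/lib64/libasan.so.8 build/fio/spdk_bdev' /usr/src/fio/fio randw-verify.fio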
00:18:07.923 lat (msec) : 2=0.75% 00:18:07.923 cpu : usr=99.27%, sys=0.09%, ctx=11, majf=0, minf=1179 00:18:07.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.923 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.923 00:18:07.923 Run status group 0 (all jobs): 00:18:07.923 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=255MiB (267MB), run=4410-4410msec 00:18:07.923 WRITE: bw=58.1MiB/s (61.0MB/s), 58.1MiB/s-58.1MiB/s (61.0MB/s-61.0MB/s), io=256MiB (269MB), run=4405-4405msec 00:18:08.183 ----------------------------------------------------- 00:18:08.183 Suppressions used: 00:18:08.183 count bytes template 00:18:08.183 1 5 /usr/src/fio/parse.c 00:18:08.183 1 8 libtcmalloc_minimal.so 00:18:08.183 1 904 libcrypto.so 00:18:08.183 ----------------------------------------------------- 00:18:08.183 00:18:08.183 22:14:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:08.183 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.183 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:08.444 22:14:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:08.444 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:08.444 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:08.444 fio-3.35 00:18:08.444 Starting 2 threads 00:18:35.022 00:18:35.022 first_half: (groupid=0, jobs=1): err= 0: pid=88425: Mon Dec 16 22:14:38 2024 00:18:35.022 read: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(256MiB/22549msec) 00:18:35.022 slat (usec): min=3, max=108, avg= 4.10, stdev= 1.08 00:18:35.022 clat (usec): min=475, max=416319, avg=37017.65, stdev=27636.45 00:18:35.022 lat (usec): min=478, max=416324, avg=37021.75, stdev=27636.66 00:18:35.022 clat percentiles (msec): 00:18:35.022 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 30], 00:18:35.022 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:35.022 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 74], 00:18:35.022 | 99.00th=[ 171], 99.50th=[ 188], 99.90th=[ 347], 99.95th=[ 401], 00:18:35.022 | 99.99th=[ 414] 00:18:35.022 write: IOPS=2911, BW=11.4MiB/s (11.9MB/s)(256MiB/22512msec); 0 zone resets 00:18:35.022 slat (usec): min=3, max=2403, avg= 5.57, stdev=11.83 00:18:35.022 clat (usec): min=351, max=47470, avg=7038.17, stdev=7512.00 00:18:35.022 lat (usec): min=358, max=47476, avg=7043.75, stdev=7512.35 00:18:35.022 clat percentiles (usec): 00:18:35.022 | 1.00th=[ 766], 5.00th=[ 1020], 10.00th=[ 1254], 20.00th=[ 2343], 00:18:35.022 | 30.00th=[ 3294], 40.00th=[ 4080], 50.00th=[ 4817], 60.00th=[ 5407], 00:18:35.022 | 70.00th=[ 6194], 80.00th=[ 8586], 90.00th=[16712], 95.00th=[24773], 00:18:35.022 | 99.00th=[35914], 99.50th=[39584], 99.90th=[43779], 99.95th=[45351], 00:18:35.022 | 99.99th=[46924] 00:18:35.022 bw ( KiB/s): min= 704, max=60632, per=97.87%, avg=22794.52, stdev=17400.35, samples=23 00:18:35.022 iops : min= 176, max=15158, avg=5698.61, stdev=4350.11, samples=23 00:18:35.022 lat (usec) : 500=0.02%, 750=0.41%, 1000=1.87% 00:18:35.022 lat (msec) : 2=6.82%, 4=10.37%, 10=22.42%, 20=5.82%, 50=48.76% 00:18:35.022 lat (msec) : 100=1.61%, 250=1.83%, 500=0.07% 00:18:35.022 cpu : usr=99.30%, sys=0.12%, ctx=38, majf=0, minf=5549 00:18:35.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:35.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.022 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.022 issued rwts: total=65465,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.022 second_half: (groupid=0, jobs=1): err= 0: pid=88426: Mon Dec 16 22:14:38 2024 00:18:35.022 read: IOPS=2928, BW=11.4MiB/s (12.0MB/s)(256MiB/22362msec) 00:18:35.022 slat (nsec): min=3122, max=45484, avg=5281.97, stdev=1136.63 00:18:35.022 clat (msec): min=9, max=311, avg=36.78, stdev=21.62 00:18:35.022 lat (msec): min=9, max=311, avg=36.79, stdev=21.62 00:18:35.022 clat percentiles (msec): 00:18:35.022 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:18:35.022 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:35.022 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 68], 00:18:35.022 | 
99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 209], 99.95th=[ 228], 00:18:35.022 | 99.99th=[ 271] 00:18:35.022 write: IOPS=2946, BW=11.5MiB/s (12.1MB/s)(256MiB/22245msec); 0 zone resets 00:18:35.022 slat (usec): min=3, max=2040, avg= 6.80, stdev=10.11 00:18:35.022 clat (usec): min=360, max=36802, avg=6900.09, stdev=5208.62 00:18:35.022 lat (usec): min=368, max=37561, avg=6906.89, stdev=5209.03 00:18:35.022 clat percentiles (usec): 00:18:35.022 | 1.00th=[ 857], 5.00th=[ 1778], 10.00th=[ 2638], 20.00th=[ 3425], 00:18:35.022 | 30.00th=[ 4178], 40.00th=[ 4817], 50.00th=[ 5407], 60.00th=[ 5997], 00:18:35.023 | 70.00th=[ 6915], 80.00th=[ 9241], 90.00th=[13960], 95.00th=[18220], 00:18:35.023 | 99.00th=[27657], 99.50th=[30540], 99.90th=[32637], 99.95th=[33162], 00:18:35.023 | 99.99th=[34866] 00:18:35.023 bw ( KiB/s): min= 184, max=41760, per=100.00%, avg=24790.48, stdev=14438.17, samples=21 00:18:35.023 iops : min= 46, max=10440, avg=6197.62, stdev=3609.54, samples=21 00:18:35.023 lat (usec) : 500=0.02%, 750=0.23%, 1000=0.54% 00:18:35.023 lat (msec) : 2=2.22%, 4=10.98%, 10=27.19%, 20=7.04%, 50=48.30% 00:18:35.023 lat (msec) : 100=1.88%, 250=1.60%, 500=0.01% 00:18:35.023 cpu : usr=99.24%, sys=0.20%, ctx=40, majf=0, minf=5587 00:18:35.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:35.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.023 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.023 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.023 00:18:35.023 Run status group 0 (all jobs): 00:18:35.023 READ: bw=22.7MiB/s (23.8MB/s), 11.3MiB/s-11.4MiB/s (11.9MB/s-12.0MB/s), io=512MiB (536MB), run=22362-22549msec 00:18:35.023 WRITE: bw=22.7MiB/s (23.8MB/s), 11.4MiB/s-11.5MiB/s (11.9MB/s-12.1MB/s), io=512MiB (537MB), run=22245-22512msec 00:18:35.023 ----------------------------------------------------- 00:18:35.023 Suppressions used: 00:18:35.023 count bytes template 00:18:35.023 2 10 /usr/src/fio/parse.c 00:18:35.023 3 288 /usr/src/fio/iolog.c 00:18:35.023 1 8 libtcmalloc_minimal.so 00:18:35.023 1 904 libcrypto.so 00:18:35.023 ----------------------------------------------------- 00:18:35.023 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:35.023 22:14:39 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:35.023 22:14:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:35.023 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:35.023 fio-3.35 00:18:35.023 Starting 1 thread 00:18:49.930 00:18:49.930 test: (groupid=0, jobs=1): err= 0: pid=88713: Mon Dec 16 22:14:55 2024 00:18:49.930 read: IOPS=7021, BW=27.4MiB/s (28.8MB/s)(255MiB/9286msec) 00:18:49.930 slat (nsec): min=3109, max=24683, avg=4909.60, stdev=1188.54 00:18:49.930 clat (usec): min=541, max=40032, avg=18220.02, stdev=3040.58 00:18:49.930 lat (usec): min=545, max=40037, avg=18224.93, stdev=3040.55 00:18:49.930 clat percentiles (usec): 00:18:49.930 | 1.00th=[13566], 5.00th=[14222], 10.00th=[15270], 20.00th=[15795], 00:18:49.930 | 30.00th=[16057], 40.00th=[16712], 50.00th=[17695], 60.00th=[18482], 00:18:49.930 | 70.00th=[19530], 80.00th=[20579], 90.00th=[22152], 95.00th=[24249], 00:18:49.930 | 99.00th=[26870], 99.50th=[27657], 99.90th=[30016], 99.95th=[34866], 00:18:49.930 | 99.99th=[39060] 00:18:49.930 write: IOPS=11.5k, BW=44.9MiB/s (47.0MB/s)(256MiB/5706msec); 0 zone resets 00:18:49.930 slat (usec): min=4, max=1699, avg= 6.98, stdev= 7.21 00:18:49.930 clat (usec): min=543, max=47634, avg=11092.27, stdev=12301.66 00:18:49.930 lat (usec): min=548, max=47649, avg=11099.25, stdev=12301.75 00:18:49.930 clat percentiles (usec): 00:18:49.930 | 1.00th=[ 750], 5.00th=[ 914], 10.00th=[ 1057], 20.00th=[ 1254], 00:18:49.930 | 30.00th=[ 1434], 40.00th=[ 1795], 50.00th=[ 7111], 60.00th=[ 9765], 00:18:49.930 | 70.00th=[13960], 80.00th=[16581], 90.00th=[36439], 95.00th=[38536], 00:18:49.930 | 99.00th=[41157], 99.50th=[42730], 99.90th=[45876], 99.95th=[46400], 00:18:49.930 | 99.99th=[47449] 00:18:49.930 bw ( KiB/s): min=24216, max=69544, per=95.09%, avg=43685.17, stdev=11036.44, samples=12 00:18:49.930 iops : min= 6054, max=17386, avg=10921.25, stdev=2759.15, samples=12 00:18:49.930 lat (usec) : 750=0.48%, 1000=3.40% 00:18:49.930 lat (msec) : 2=16.53%, 4=0.65%, 10=9.30%, 20=48.82%, 50=20.82% 00:18:49.930 cpu : usr=99.14%, sys=0.16%, ctx=21, majf=0, minf=5575 00:18:49.930 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:49.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.930 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.930 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.930 00:18:49.930 Run status group 0 (all jobs): 00:18:49.931 READ: bw=27.4MiB/s (28.8MB/s), 27.4MiB/s-27.4MiB/s (28.8MB/s-28.8MB/s), io=255MiB (267MB), run=9286-9286msec 00:18:49.931 WRITE: bw=44.9MiB/s (47.0MB/s), 44.9MiB/s-44.9MiB/s (47.0MB/s-47.0MB/s), io=256MiB (268MB), run=5706-5706msec 00:18:50.504 ----------------------------------------------------- 00:18:50.504 Suppressions used: 00:18:50.504 count bytes template 00:18:50.504 1 5 /usr/src/fio/parse.c 00:18:50.504 2 192 /usr/src/fio/iolog.c 00:18:50.504 1 8 libtcmalloc_minimal.so 00:18:50.504 1 904 libcrypto.so 00:18:50.504 ----------------------------------------------------- 00:18:50.504 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:50.504 Remove shared memory files 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid71166 /dev/shm/spdk_tgt_trace.pid87114 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:50.504 ************************************ 00:18:50.504 END TEST ftl_fio_basic 00:18:50.504 ************************************ 00:18:50.504 00:18:50.504 real 0m59.681s 00:18:50.504 user 2m7.879s 00:18:50.504 sys 0m2.679s 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.504 22:14:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:50.504 22:14:56 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:50.504 22:14:56 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:50.504 22:14:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.504 22:14:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:50.504 ************************************ 00:18:50.504 START TEST ftl_bdevperf 00:18:50.504 ************************************ 00:18:50.504 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:50.766 * Looking for test storage... 
00:18:50.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:50.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.766 --rc genhtml_branch_coverage=1 00:18:50.766 --rc genhtml_function_coverage=1 00:18:50.766 --rc genhtml_legend=1 00:18:50.766 --rc geninfo_all_blocks=1 00:18:50.766 --rc geninfo_unexecuted_blocks=1 00:18:50.766 00:18:50.766 ' 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:50.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.766 --rc genhtml_branch_coverage=1 00:18:50.766 
--rc genhtml_function_coverage=1 00:18:50.766 --rc genhtml_legend=1 00:18:50.766 --rc geninfo_all_blocks=1 00:18:50.766 --rc geninfo_unexecuted_blocks=1 00:18:50.766 00:18:50.766 ' 00:18:50.766 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:50.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.766 --rc genhtml_branch_coverage=1 00:18:50.766 --rc genhtml_function_coverage=1 00:18:50.766 --rc genhtml_legend=1 00:18:50.766 --rc geninfo_all_blocks=1 00:18:50.766 --rc geninfo_unexecuted_blocks=1 00:18:50.767 00:18:50.767 ' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:50.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.767 --rc genhtml_branch_coverage=1 00:18:50.767 --rc genhtml_function_coverage=1 00:18:50.767 --rc genhtml_legend=1 00:18:50.767 --rc geninfo_all_blocks=1 00:18:50.767 --rc geninfo_unexecuted_blocks=1 00:18:50.767 00:18:50.767 ' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88962 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88962 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88962 ']' 00:18:50.767 22:14:56 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.767 22:14:57 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.767 22:14:57 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.767 22:14:57 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.767 22:14:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:50.767 [2024-12-16 22:14:57.071784] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
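(Note on the bdevperf start above: -z holds the application idle after init until an RPC kicks off the workload, -T ftl0 restricts the run to the bdev named ftl0, and waitforlisten blocks until the RPC socket answers. A minimal sketch of that wait-mode pattern follows; the perform_tests trigger via SPDK's bdevperf helper script is an assumption based on bdevperf's documented flow, since this part of the log has not reached it yet.)

    # sketch of the -z wait-mode flow; helper path/flags are assumptions
    build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # ... waitforlisten $bdevperf_pid, then construct the ftl0 bdev over rpc.py ...
    examples/bdev/bdevperf/bdevperf.py -t 240 perform_tests   # -t mirrors timeout=240 above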
00:18:50.767 [2024-12-16 22:14:57.072172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88962 ] 00:18:51.029 [2024-12-16 22:14:57.236291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.029 [2024-12-16 22:14:57.278060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:51.602 22:14:57 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:52.175 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:52.176 { 00:18:52.176 "name": "nvme0n1", 00:18:52.176 "aliases": [ 00:18:52.176 "ed23f0c9-050d-4c3d-8cd9-4e21b0aaf8b6" 00:18:52.176 ], 00:18:52.176 "product_name": "NVMe disk", 00:18:52.176 "block_size": 4096, 00:18:52.176 "num_blocks": 1310720, 00:18:52.176 "uuid": "ed23f0c9-050d-4c3d-8cd9-4e21b0aaf8b6", 00:18:52.176 "numa_id": -1, 00:18:52.176 "assigned_rate_limits": { 00:18:52.176 "rw_ios_per_sec": 0, 00:18:52.176 "rw_mbytes_per_sec": 0, 00:18:52.176 "r_mbytes_per_sec": 0, 00:18:52.176 "w_mbytes_per_sec": 0 00:18:52.176 }, 00:18:52.176 "claimed": true, 00:18:52.176 "claim_type": "read_many_write_one", 00:18:52.176 "zoned": false, 00:18:52.176 "supported_io_types": { 00:18:52.176 "read": true, 00:18:52.176 "write": true, 00:18:52.176 "unmap": true, 00:18:52.176 "flush": true, 00:18:52.176 "reset": true, 00:18:52.176 "nvme_admin": true, 00:18:52.176 "nvme_io": true, 00:18:52.176 "nvme_io_md": false, 00:18:52.176 "write_zeroes": true, 00:18:52.176 "zcopy": false, 00:18:52.176 "get_zone_info": false, 00:18:52.176 "zone_management": false, 00:18:52.176 "zone_append": false, 00:18:52.176 "compare": true, 00:18:52.176 "compare_and_write": false, 00:18:52.176 "abort": true, 00:18:52.176 "seek_hole": false, 00:18:52.176 "seek_data": false, 00:18:52.176 "copy": true, 00:18:52.176 "nvme_iov_md": false 00:18:52.176 }, 00:18:52.176 "driver_specific": { 00:18:52.176 
"nvme": [ 00:18:52.176 { 00:18:52.176 "pci_address": "0000:00:11.0", 00:18:52.176 "trid": { 00:18:52.176 "trtype": "PCIe", 00:18:52.176 "traddr": "0000:00:11.0" 00:18:52.176 }, 00:18:52.176 "ctrlr_data": { 00:18:52.176 "cntlid": 0, 00:18:52.176 "vendor_id": "0x1b36", 00:18:52.176 "model_number": "QEMU NVMe Ctrl", 00:18:52.176 "serial_number": "12341", 00:18:52.176 "firmware_revision": "8.0.0", 00:18:52.176 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:52.176 "oacs": { 00:18:52.176 "security": 0, 00:18:52.176 "format": 1, 00:18:52.176 "firmware": 0, 00:18:52.176 "ns_manage": 1 00:18:52.176 }, 00:18:52.176 "multi_ctrlr": false, 00:18:52.176 "ana_reporting": false 00:18:52.176 }, 00:18:52.176 "vs": { 00:18:52.176 "nvme_version": "1.4" 00:18:52.176 }, 00:18:52.176 "ns_data": { 00:18:52.176 "id": 1, 00:18:52.176 "can_share": false 00:18:52.176 } 00:18:52.176 } 00:18:52.176 ], 00:18:52.176 "mp_policy": "active_passive" 00:18:52.176 } 00:18:52.176 } 00:18:52.176 ]' 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:52.176 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=7eea98e2-bc59-433f-896e-ef12518279d4 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:52.459 22:14:58 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7eea98e2-bc59-433f-896e-ef12518279d4 00:18:52.727 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:52.988 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=8ff40592-780a-4797-bd3b-92fbd33a7428 00:18:52.988 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8ff40592-780a-4797-bd3b-92fbd33a7428 00:18:53.249 22:14:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.249 22:14:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.250 22:14:59 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:53.250 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:53.510 { 00:18:53.510 "name": "990d7a50-3c89-43db-b781-7e18ed5b07eb", 00:18:53.510 "aliases": [ 00:18:53.510 "lvs/nvme0n1p0" 00:18:53.510 ], 00:18:53.510 "product_name": "Logical Volume", 00:18:53.510 "block_size": 4096, 00:18:53.510 "num_blocks": 26476544, 00:18:53.510 "uuid": "990d7a50-3c89-43db-b781-7e18ed5b07eb", 00:18:53.510 "assigned_rate_limits": { 00:18:53.510 "rw_ios_per_sec": 0, 00:18:53.510 "rw_mbytes_per_sec": 0, 00:18:53.510 "r_mbytes_per_sec": 0, 00:18:53.510 "w_mbytes_per_sec": 0 00:18:53.510 }, 00:18:53.510 "claimed": false, 00:18:53.510 "zoned": false, 00:18:53.510 "supported_io_types": { 00:18:53.510 "read": true, 00:18:53.510 "write": true, 00:18:53.510 "unmap": true, 00:18:53.510 "flush": false, 00:18:53.510 "reset": true, 00:18:53.510 "nvme_admin": false, 00:18:53.510 "nvme_io": false, 00:18:53.510 "nvme_io_md": false, 00:18:53.510 "write_zeroes": true, 00:18:53.510 "zcopy": false, 00:18:53.510 "get_zone_info": false, 00:18:53.510 "zone_management": false, 00:18:53.510 "zone_append": false, 00:18:53.510 "compare": false, 00:18:53.510 "compare_and_write": false, 00:18:53.510 "abort": false, 00:18:53.510 "seek_hole": true, 00:18:53.510 "seek_data": true, 00:18:53.510 "copy": false, 00:18:53.510 "nvme_iov_md": false 00:18:53.510 }, 00:18:53.510 "driver_specific": { 00:18:53.510 "lvol": { 00:18:53.510 "lvol_store_uuid": "8ff40592-780a-4797-bd3b-92fbd33a7428", 00:18:53.510 "base_bdev": "nvme0n1", 00:18:53.510 "thin_provision": true, 00:18:53.510 "num_allocated_clusters": 0, 00:18:53.510 "snapshot": false, 00:18:53.510 "clone": false, 00:18:53.510 "esnap_clone": false 00:18:53.510 } 00:18:53.510 } 00:18:53.510 } 00:18:53.510 ]' 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:53.510 22:14:59 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:53.768 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:54.027 { 00:18:54.027 "name": "990d7a50-3c89-43db-b781-7e18ed5b07eb", 00:18:54.027 "aliases": [ 00:18:54.027 "lvs/nvme0n1p0" 00:18:54.027 ], 00:18:54.027 "product_name": "Logical Volume", 00:18:54.027 "block_size": 4096, 00:18:54.027 "num_blocks": 26476544, 00:18:54.027 "uuid": "990d7a50-3c89-43db-b781-7e18ed5b07eb", 00:18:54.027 "assigned_rate_limits": { 00:18:54.027 "rw_ios_per_sec": 0, 00:18:54.027 "rw_mbytes_per_sec": 0, 00:18:54.027 "r_mbytes_per_sec": 0, 00:18:54.027 "w_mbytes_per_sec": 0 00:18:54.027 }, 00:18:54.027 "claimed": false, 00:18:54.027 "zoned": false, 00:18:54.027 "supported_io_types": { 00:18:54.027 "read": true, 00:18:54.027 "write": true, 00:18:54.027 "unmap": true, 00:18:54.027 "flush": false, 00:18:54.027 "reset": true, 00:18:54.027 "nvme_admin": false, 00:18:54.027 "nvme_io": false, 00:18:54.027 "nvme_io_md": false, 00:18:54.027 "write_zeroes": true, 00:18:54.027 "zcopy": false, 00:18:54.027 "get_zone_info": false, 00:18:54.027 "zone_management": false, 00:18:54.027 "zone_append": false, 00:18:54.027 "compare": false, 00:18:54.027 "compare_and_write": false, 00:18:54.027 "abort": false, 00:18:54.027 "seek_hole": true, 00:18:54.027 "seek_data": true, 00:18:54.027 "copy": false, 00:18:54.027 "nvme_iov_md": false 00:18:54.027 }, 00:18:54.027 "driver_specific": { 00:18:54.027 "lvol": { 00:18:54.027 "lvol_store_uuid": "8ff40592-780a-4797-bd3b-92fbd33a7428", 00:18:54.027 "base_bdev": "nvme0n1", 00:18:54.027 "thin_provision": true, 00:18:54.027 "num_allocated_clusters": 0, 00:18:54.027 "snapshot": false, 00:18:54.027 "clone": false, 00:18:54.027 "esnap_clone": false 00:18:54.027 } 00:18:54.027 } 00:18:54.027 } 00:18:54.027 ]' 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:54.027 22:15:00 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:54.285 22:15:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:54.285 22:15:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:54.285 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:54.285 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:54.285 22:15:00 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:54.285 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:54.285 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 990d7a50-3c89-43db-b781-7e18ed5b07eb 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:54.544 { 00:18:54.544 "name": "990d7a50-3c89-43db-b781-7e18ed5b07eb", 00:18:54.544 "aliases": [ 00:18:54.544 "lvs/nvme0n1p0" 00:18:54.544 ], 00:18:54.544 "product_name": "Logical Volume", 00:18:54.544 "block_size": 4096, 00:18:54.544 "num_blocks": 26476544, 00:18:54.544 "uuid": "990d7a50-3c89-43db-b781-7e18ed5b07eb", 00:18:54.544 "assigned_rate_limits": { 00:18:54.544 "rw_ios_per_sec": 0, 00:18:54.544 "rw_mbytes_per_sec": 0, 00:18:54.544 "r_mbytes_per_sec": 0, 00:18:54.544 "w_mbytes_per_sec": 0 00:18:54.544 }, 00:18:54.544 "claimed": false, 00:18:54.544 "zoned": false, 00:18:54.544 "supported_io_types": { 00:18:54.544 "read": true, 00:18:54.544 "write": true, 00:18:54.544 "unmap": true, 00:18:54.544 "flush": false, 00:18:54.544 "reset": true, 00:18:54.544 "nvme_admin": false, 00:18:54.544 "nvme_io": false, 00:18:54.544 "nvme_io_md": false, 00:18:54.544 "write_zeroes": true, 00:18:54.544 "zcopy": false, 00:18:54.544 "get_zone_info": false, 00:18:54.544 "zone_management": false, 00:18:54.544 "zone_append": false, 00:18:54.544 "compare": false, 00:18:54.544 "compare_and_write": false, 00:18:54.544 "abort": false, 00:18:54.544 "seek_hole": true, 00:18:54.544 "seek_data": true, 00:18:54.544 "copy": false, 00:18:54.544 "nvme_iov_md": false 00:18:54.544 }, 00:18:54.544 "driver_specific": { 00:18:54.544 "lvol": { 00:18:54.544 "lvol_store_uuid": "8ff40592-780a-4797-bd3b-92fbd33a7428", 00:18:54.544 "base_bdev": "nvme0n1", 00:18:54.544 "thin_provision": true, 00:18:54.544 "num_allocated_clusters": 0, 00:18:54.544 "snapshot": false, 00:18:54.544 "clone": false, 00:18:54.544 "esnap_clone": false 00:18:54.544 } 00:18:54.544 } 00:18:54.544 } 00:18:54.544 ]' 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:54.544 22:15:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 990d7a50-3c89-43db-b781-7e18ed5b07eb -c nvc0n1p0 --l2p_dram_limit 20 00:18:54.805 [2024-12-16 22:15:00.949023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.949178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.805 [2024-12-16 22:15:00.949198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.805 [2024-12-16 22:15:00.949208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.949260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.949269] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.805 [2024-12-16 22:15:00.949279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:54.805 [2024-12-16 22:15:00.949288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.949305] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.805 [2024-12-16 22:15:00.949501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.805 [2024-12-16 22:15:00.949516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.949527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.805 [2024-12-16 22:15:00.949536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:18:54.805 [2024-12-16 22:15:00.949543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.949566] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6048a178-cae4-4a52-adb2-c40e7efe543e 00:18:54.805 [2024-12-16 22:15:00.950879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.950904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:54.805 [2024-12-16 22:15:00.950912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:54.805 [2024-12-16 22:15:00.950927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.957812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.957856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.805 [2024-12-16 22:15:00.957864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.851 ms 00:18:54.805 [2024-12-16 22:15:00.957877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.957968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.957979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.805 [2024-12-16 22:15:00.957990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:54.805 [2024-12-16 22:15:00.957998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.958035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.958045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.805 [2024-12-16 22:15:00.958051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:54.805 [2024-12-16 22:15:00.958059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.958075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.805 [2024-12-16 22:15:00.959710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.959738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.805 [2024-12-16 22:15:00.959748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.637 ms 00:18:54.805 [2024-12-16 22:15:00.959753] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.959782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.805 [2024-12-16 22:15:00.959789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.805 [2024-12-16 22:15:00.959799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:54.805 [2024-12-16 22:15:00.959805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.805 [2024-12-16 22:15:00.959818] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:54.805 [2024-12-16 22:15:00.959955] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.805 [2024-12-16 22:15:00.959968] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.805 [2024-12-16 22:15:00.959988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:54.806 [2024-12-16 22:15:00.960000] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960007] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:54.806 [2024-12-16 22:15:00.960021] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.806 [2024-12-16 22:15:00.960030] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.806 [2024-12-16 22:15:00.960037] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.806 [2024-12-16 22:15:00.960045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.806 [2024-12-16 22:15:00.960051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.806 [2024-12-16 22:15:00.960062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:18:54.806 [2024-12-16 22:15:00.960068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.806 [2024-12-16 22:15:00.960136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.806 [2024-12-16 22:15:00.960144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.806 [2024-12-16 22:15:00.960154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:54.806 [2024-12-16 22:15:00.960160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.806 [2024-12-16 22:15:00.960238] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.806 [2024-12-16 22:15:00.960252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.806 [2024-12-16 22:15:00.960260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.806 [2024-12-16 22:15:00.960279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:54.806 
[2024-12-16 22:15:00.960291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.806 [2024-12-16 22:15:00.960299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.806 [2024-12-16 22:15:00.960312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.806 [2024-12-16 22:15:00.960317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:54.806 [2024-12-16 22:15:00.960326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.806 [2024-12-16 22:15:00.960332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.806 [2024-12-16 22:15:00.960338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:54.806 [2024-12-16 22:15:00.960343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.806 [2024-12-16 22:15:00.960355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.806 [2024-12-16 22:15:00.960377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.806 [2024-12-16 22:15:00.960398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.806 [2024-12-16 22:15:00.960427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:54.806 [2024-12-16 22:15:00.960448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.806 [2024-12-16 22:15:00.960469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.806 [2024-12-16 22:15:00.960483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.806 [2024-12-16 22:15:00.960489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:54.806 [2024-12-16 22:15:00.960496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.806 [2024-12-16 22:15:00.960502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.806 [2024-12-16 22:15:00.960510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:54.806 [2024-12-16 22:15:00.960517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:54.806 [2024-12-16 22:15:00.960530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:54.806 [2024-12-16 22:15:00.960537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960543] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.806 [2024-12-16 22:15:00.960553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.806 [2024-12-16 22:15:00.960559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.806 [2024-12-16 22:15:00.960580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.806 [2024-12-16 22:15:00.960588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.806 [2024-12-16 22:15:00.960594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.806 [2024-12-16 22:15:00.960601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.806 [2024-12-16 22:15:00.960607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.806 [2024-12-16 22:15:00.960614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.806 [2024-12-16 22:15:00.960622] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.806 [2024-12-16 22:15:00.960631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.806 [2024-12-16 22:15:00.960638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:54.806 [2024-12-16 22:15:00.960648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:54.806 [2024-12-16 22:15:00.960655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:54.806 [2024-12-16 22:15:00.960663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:54.806 [2024-12-16 22:15:00.960669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:54.806 [2024-12-16 22:15:00.960678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:54.806 [2024-12-16 22:15:00.960685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:54.806 [2024-12-16 22:15:00.960694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:54.806 [2024-12-16 22:15:00.960700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:54.806 [2024-12-16 22:15:00.960708] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:54.806 [2024-12-16 22:15:00.960715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:54.806 [2024-12-16 22:15:00.960722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:54.806 [2024-12-16 22:15:00.960728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:54.806 [2024-12-16 22:15:00.960736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:54.806 [2024-12-16 22:15:00.960742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.806 [2024-12-16 22:15:00.960752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.806 [2024-12-16 22:15:00.960758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.806 [2024-12-16 22:15:00.960766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.806 [2024-12-16 22:15:00.960772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.806 [2024-12-16 22:15:00.960779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.806 [2024-12-16 22:15:00.960785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.806 [2024-12-16 22:15:00.960794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.806 [2024-12-16 22:15:00.960802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:18:54.806 [2024-12-16 22:15:00.960810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.807 [2024-12-16 22:15:00.960846] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
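For reference, the whole stack exercised up to this point was assembled over JSON-RPC. Below is a condensed, hand-written sketch of that sequence (not part of the log): the attach, lvstore, lvol, split and ftl_create commands are the ones visible in the xtrace above, while the shell variables and the command substitution capturing the printed lvol name are illustrative assumptions.

  # Sketch: rebuild the ftl0 stack by hand (paths and sizes mirror this run)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
  LVOL=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u 8ff40592-780a-4797-bd3b-92fbd33a7428)  # thin-provisioned, 103424 MiB
  $RPC bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB split -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" -c nvc0n1p0 --l2p_dram_limit 20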
00:18:54.807 [2024-12-16 22:15:00.960856] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:59.010 [2024-12-16 22:15:04.944190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.010 [2024-12-16 22:15:04.944309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:59.010 [2024-12-16 22:15:04.944335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3983.323 ms 00:18:59.010 [2024-12-16 22:15:04.944347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.010 [2024-12-16 22:15:04.964531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.010 [2024-12-16 22:15:04.964614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.010 [2024-12-16 22:15:04.964634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.046 ms 00:18:59.010 [2024-12-16 22:15:04.964650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.010 [2024-12-16 22:15:04.964801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.010 [2024-12-16 22:15:04.964817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.010 [2024-12-16 22:15:04.964832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:59.010 [2024-12-16 22:15:04.964869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.010 [2024-12-16 22:15:04.999091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.010 [2024-12-16 22:15:04.999165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.010 [2024-12-16 22:15:04.999182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.182 ms 00:18:59.010 [2024-12-16 22:15:04.999196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.010 [2024-12-16 22:15:04.999242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.010 [2024-12-16 22:15:04.999260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.010 [2024-12-16 22:15:04.999272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.011 [2024-12-16 22:15:04.999290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.000100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.000148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.011 [2024-12-16 22:15:05.000162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:18:59.011 [2024-12-16 22:15:05.000178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.000321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.000351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.011 [2024-12-16 22:15:05.000369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:59.011 [2024-12-16 22:15:05.000382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.011812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.011894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.011 [2024-12-16 
22:15:05.011909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.406 ms 00:18:59.011 [2024-12-16 22:15:05.011921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.023321] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:59.011 [2024-12-16 22:15:05.032803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.032869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.011 [2024-12-16 22:15:05.032886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.785 ms 00:18:59.011 [2024-12-16 22:15:05.032895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.130804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.131169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:59.011 [2024-12-16 22:15:05.131205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.869 ms 00:18:59.011 [2024-12-16 22:15:05.131220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.131578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.131611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.011 [2024-12-16 22:15:05.131628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:18:59.011 [2024-12-16 22:15:05.131637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.138923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.138979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:59.011 [2024-12-16 22:15:05.138994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.235 ms 00:18:59.011 [2024-12-16 22:15:05.139010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.144977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.145028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:59.011 [2024-12-16 22:15:05.145044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.903 ms 00:18:59.011 [2024-12-16 22:15:05.145052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.145417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.145435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.011 [2024-12-16 22:15:05.145450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:18:59.011 [2024-12-16 22:15:05.145458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.198586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.198643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:59.011 [2024-12-16 22:15:05.198660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.080 ms 00:18:59.011 [2024-12-16 22:15:05.198670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 
22:15:05.207457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.207520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:59.011 [2024-12-16 22:15:05.207536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.697 ms 00:18:59.011 [2024-12-16 22:15:05.207545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.214351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.214404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:59.011 [2024-12-16 22:15:05.214419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.748 ms 00:18:59.011 [2024-12-16 22:15:05.214427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.221593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.221858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.011 [2024-12-16 22:15:05.221889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.108 ms 00:18:59.011 [2024-12-16 22:15:05.221897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.221979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.221996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.011 [2024-12-16 22:15:05.222010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:59.011 [2024-12-16 22:15:05.222020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.222106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.011 [2024-12-16 22:15:05.222117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:59.011 [2024-12-16 22:15:05.222129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:59.011 [2024-12-16 22:15:05.222139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.011 [2024-12-16 22:15:05.223565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4273.916 ms, result 0 00:18:59.011 { 00:18:59.011 "name": "ftl0", 00:18:59.011 "uuid": "6048a178-cae4-4a52-adb2-c40e7efe543e" 00:18:59.011 } 00:18:59.011 22:15:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:59.011 22:15:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:59.011 22:15:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:59.272 22:15:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:59.272 [2024-12-16 22:15:05.565375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:59.272 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:59.272 Zero copy mechanism will not be used. 00:18:59.272 Running I/O for 4 seconds... 
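The bdevperf.sh@28 step above is a readiness gate: it asks the new FTL bdev for its stats and greps for its name before any load is applied. A minimal sketch of that gate and of the workload launch pattern used for this and the two runs that follow (only -q, -w and -o change; a running bdevperf instance with its RPC socket is assumed):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Gate: fail fast unless ftl0 answers the stats RPC under its own name
  $SPDK/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
  # Run 1 as traced above; the later runs swap in -q 128 -o 4096 and -w verify
  $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632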
00:19:01.586 700.00 IOPS, 46.48 MiB/s [2024-12-16T22:15:08.868Z] 733.50 IOPS, 48.71 MiB/s [2024-12-16T22:15:09.809Z] 746.00 IOPS, 49.54 MiB/s [2024-12-16T22:15:09.809Z] 750.00 IOPS, 49.80 MiB/s
00:19:03.462 Latency(us)
00:19:03.462 [2024-12-16T22:15:09.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:03.462 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:03.462 ftl0 : 4.00 749.88 49.80 0.00 0.00 1405.42 389.12 3302.01
00:19:03.462 [2024-12-16T22:15:09.809Z] ===================================================================================================================
00:19:03.462 [2024-12-16T22:15:09.809Z] Total : 749.88 49.80 0.00 0.00 1405.42 389.12 3302.01
00:19:03.462 [2024-12-16 22:15:09.574318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:19:03.462 "results": [
00:19:03.462 {
00:19:03.462 "job": "ftl0",
00:19:03.462 "core_mask": "0x1",
00:19:03.462 "workload": "randwrite",
00:19:03.462 "status": "finished",
00:19:03.462 "queue_depth": 1,
00:19:03.462 "io_size": 69632,
00:19:03.462 "runtime": 4.001947,
00:19:03.462 "iops": 749.8849934794239,
00:19:03.462 "mibps": 49.79705034824299,
00:19:03.462 "io_failed": 0,
00:19:03.462 "io_timeout": 0,
00:19:03.462 "avg_latency_us": 1405.4202486350705,
00:19:03.462 "min_latency_us": 389.12,
00:19:03.462 "max_latency_us": 3302.0061538461537
00:19:03.462 }
00:19:03.462 ],
00:19:03.462 "core_count": 1
00:19:03.462 }
00:19:03.462 22:15:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 [2024-12-16 22:15:09.676707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 Running I/O for 4 seconds...
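As a quick cross-check of the table above (not part of the test output), the MiB/s column is just IOPS times the 69632-byte I/O size divided by 2^20; the figures below come straight from the results JSON:

  awk 'BEGIN { printf "%.2f MiB/s\n", 749.8849934794239 * 69632 / 1048576 }'
  # prints 49.80 MiB/s, matching the mibps field and the Total row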
00:19:05.346 6182.00 IOPS, 24.15 MiB/s [2024-12-16T22:15:13.079Z] 5726.50 IOPS, 22.37 MiB/s [2024-12-16T22:15:14.022Z] 5487.33 IOPS, 21.43 MiB/s [2024-12-16T22:15:14.022Z] 5318.50 IOPS, 20.78 MiB/s
00:19:07.675 Latency(us)
00:19:07.675 [2024-12-16T22:15:14.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:07.676 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:07.676 ftl0 : 4.04 5299.25 20.70 0.00 0.00 24043.02 395.42 45371.08
00:19:07.676 [2024-12-16T22:15:14.023Z] ===================================================================================================================
00:19:07.676 [2024-12-16T22:15:14.023Z] Total : 5299.25 20.70 0.00 0.00 24043.02 0.00 45371.08
00:19:07.676 {
00:19:07.676 "results": [
00:19:07.676 {
00:19:07.676 "job": "ftl0",
00:19:07.676 "core_mask": "0x1",
00:19:07.676 "workload": "randwrite",
00:19:07.676 "status": "finished",
00:19:07.676 "queue_depth": 128,
00:19:07.676 "io_size": 4096,
00:19:07.676 "runtime": 4.0368,
00:19:07.676 "iops": 5299.246928260008,
00:19:07.676 "mibps": 20.700183313515655,
00:19:07.676 "io_failed": 0,
00:19:07.676 "io_timeout": 0,
00:19:07.676 "avg_latency_us": 24043.01800816984,
00:19:07.676 "min_latency_us": 395.4215384615385,
00:19:07.676 "max_latency_us": 45371.07692307692
00:19:07.676 }
00:19:07.676 ],
00:19:07.676 "core_count": 1
00:19:07.676 }
00:19:07.676 [2024-12-16 22:15:13.720431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:07.676 22:15:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-12-16 22:15:13.834474] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 Running I/O for 4 seconds...
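Each run ends by printing its results as JSON, so the numbers can be scraped mechanically. A small sketch with jq, assuming the block above was saved to a file (the perf_results.json name is hypothetical):

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' perf_results.json
  # -> ftl0: 5299.246928260008 IOPS, 20.700183313515655 MiB/s, avg 24043.01800816984 us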
00:19:09.565 4363.00 IOPS, 17.04 MiB/s [2024-12-16T22:15:16.853Z] 4311.00 IOPS, 16.84 MiB/s [2024-12-16T22:15:18.234Z] 4328.67 IOPS, 16.91 MiB/s [2024-12-16T22:15:18.234Z] 4434.25 IOPS, 17.32 MiB/s
00:19:11.887 Latency(us)
00:19:11.887 [2024-12-16T22:15:18.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:11.887 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:11.887 Verification LBA range: start 0x0 length 0x1400000
00:19:11.887 ftl0 : 4.01 4450.12 17.38 0.00 0.00 28688.66 412.75 44766.13
00:19:11.887 [2024-12-16T22:15:18.234Z] ===================================================================================================================
00:19:11.887 [2024-12-16T22:15:18.234Z] Total : 4450.12 17.38 0.00 0.00 28688.66 0.00 44766.13
00:19:11.887 [2024-12-16 22:15:17.854164] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:19:11.887 "results": [
00:19:11.887 {
00:19:11.887 "job": "ftl0",
00:19:11.887 "core_mask": "0x1",
00:19:11.887 "workload": "verify",
00:19:11.887 "status": "finished",
00:19:11.887 "verify_range": {
00:19:11.887 "start": 0,
00:19:11.887 "length": 20971520
00:19:11.887 },
00:19:11.887 "queue_depth": 128,
00:19:11.887 "io_size": 4096,
00:19:11.887 "runtime": 4.011576,
00:19:11.887 "iops": 4450.121348816525,
00:19:11.887 "mibps": 17.38328651881455,
00:19:11.887 "io_failed": 0,
00:19:11.887 "io_timeout": 0,
00:19:11.887 "avg_latency_us": 28688.66234854099,
00:19:11.887 "min_latency_us": 412.7507692307692,
00:19:11.887 "max_latency_us": 44766.12923076923
00:19:11.887 }
00:19:11.887 ],
00:19:11.887 "core_count": 1
00:19:11.887 }
00:19:11.887 22:15:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-12-16 22:15:18.058921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-16 22:15:18.058961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel [2024-12-16 22:15:18.058974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms [2024-12-16 22:15:18.058983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-16 22:15:18.059014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread [2024-12-16 22:15:18.059436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-16 22:15:18.059462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device [2024-12-16 22:15:18.059471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms [2024-12-16 22:15:18.059480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-16 22:15:18.062172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-16 22:15:18.062208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller [2024-12-16 22:15:18.062218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms [2024-12-16 22:15:18.062229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.150 [2024-12-16 22:15:18.265254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.150 [2024-12-16 22:15:18.265298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:19:12.150 [2024-12-16 22:15:18.265315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 203.008 ms 00:19:12.150 [2024-12-16 22:15:18.265325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.271558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.271589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:12.150 [2024-12-16 22:15:18.271599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:19:12.150 [2024-12-16 22:15:18.271609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.274000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.274042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.150 [2024-12-16 22:15:18.274051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.326 ms 00:19:12.150 [2024-12-16 22:15:18.274060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.278946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.279080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.150 [2024-12-16 22:15:18.279097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.855 ms 00:19:12.150 [2024-12-16 22:15:18.279113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.279253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.279265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.150 [2024-12-16 22:15:18.279274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:12.150 [2024-12-16 22:15:18.279283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.281777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.281814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:12.150 [2024-12-16 22:15:18.281823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:19:12.150 [2024-12-16 22:15:18.281833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.284058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.284093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:12.150 [2024-12-16 22:15:18.284103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:19:12.150 [2024-12-16 22:15:18.284112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.285789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.285824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.150 [2024-12-16 22:15:18.285834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:19:12.150 [2024-12-16 22:15:18.285862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.287697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.150 [2024-12-16 22:15:18.287734] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.150 [2024-12-16 22:15:18.287744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:19:12.150 [2024-12-16 22:15:18.287752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.150 [2024-12-16 22:15:18.287780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.150 [2024-12-16 22:15:18.287798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:12.150 [2024-12-16 22:15:18.287993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:12.151 [2024-12-16 22:15:18.288001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288626] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:12.151 [2024-12-16 22:15:18.288665] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:12.151 [2024-12-16 22:15:18.288673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6048a178-cae4-4a52-adb2-c40e7efe543e 00:19:12.151 [2024-12-16 22:15:18.288682] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:12.151 [2024-12-16 22:15:18.288689] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:12.151 [2024-12-16 22:15:18.288697] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:12.151 [2024-12-16 22:15:18.288704] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:12.151 [2024-12-16 22:15:18.288714] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:12.151 [2024-12-16 22:15:18.288721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:12.151 [2024-12-16 22:15:18.288729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:12.151 [2024-12-16 22:15:18.288735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:12.151 [2024-12-16 22:15:18.288742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:12.151 [2024-12-16 22:15:18.288749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.151 [2024-12-16 22:15:18.288760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:12.151 [2024-12-16 22:15:18.288770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:19:12.151 [2024-12-16 22:15:18.288779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.290468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.152 [2024-12-16 22:15:18.290519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:12.152 [2024-12-16 22:15:18.290541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:19:12.152 [2024-12-16 22:15:18.290563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.290661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.152 [2024-12-16 22:15:18.290691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:12.152 [2024-12-16 22:15:18.290711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:12.152 [2024-12-16 22:15:18.290733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.295787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.295916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.152 [2024-12-16 22:15:18.295971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.295999] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.296061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.296087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.152 [2024-12-16 22:15:18.296106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.296125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.296198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.296225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.152 [2024-12-16 22:15:18.296245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.296311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.296343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.296366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.152 [2024-12-16 22:15:18.296387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.296409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.305018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.305161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.152 [2024-12-16 22:15:18.305210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.305233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.312799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.312953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.152 [2024-12-16 22:15:18.313007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.313032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.313086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.313110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.152 [2024-12-16 22:15:18.313129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.313154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.313207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.313232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.152 [2024-12-16 22:15:18.313252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.313323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.313405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.313431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.152 [2024-12-16 22:15:18.313450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:12.152 [2024-12-16 22:15:18.313470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.313518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.313544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.152 [2024-12-16 22:15:18.313564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.313585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.313631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.313733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.152 [2024-12-16 22:15:18.313758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.313778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.313831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.152 [2024-12-16 22:15:18.313936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.152 [2024-12-16 22:15:18.313962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.152 [2024-12-16 22:15:18.313987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.152 [2024-12-16 22:15:18.314162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 255.207 ms, result 0 00:19:12.152 true 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88962 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88962 ']' 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88962 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88962 00:19:12.152 killing process with pid 88962 00:19:12.152 Received shutdown signal, test time was about 4.000000 seconds 00:19:12.152 00:19:12.152 Latency(us) 00:19:12.152 [2024-12-16T22:15:18.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.152 [2024-12-16T22:15:18.499Z] =================================================================================================================== 00:19:12.152 [2024-12-16T22:15:18.499Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88962' 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88962 00:19:12.152 22:15:18 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88962 00:19:13.537 Remove shared memory files 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:13.537 22:15:19 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:13.537 ************************************ 00:19:13.537 END TEST ftl_bdevperf 00:19:13.537 ************************************ 00:19:13.537 00:19:13.537 real 0m22.883s 00:19:13.537 user 0m25.450s 00:19:13.537 sys 0m1.067s 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.537 22:15:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:13.537 22:15:19 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:13.537 22:15:19 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:13.537 22:15:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.537 22:15:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:13.537 ************************************ 00:19:13.537 START TEST ftl_trim 00:19:13.537 ************************************ 00:19:13.537 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:13.537 * Looking for test storage... 00:19:13.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.537 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.537 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.537 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.799 22:15:19 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.799 --rc genhtml_branch_coverage=1 00:19:13.799 --rc genhtml_function_coverage=1 00:19:13.799 --rc genhtml_legend=1 00:19:13.799 --rc geninfo_all_blocks=1 00:19:13.799 --rc geninfo_unexecuted_blocks=1 00:19:13.799 00:19:13.799 ' 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.799 --rc genhtml_branch_coverage=1 00:19:13.799 --rc genhtml_function_coverage=1 00:19:13.799 --rc genhtml_legend=1 00:19:13.799 --rc geninfo_all_blocks=1 00:19:13.799 --rc geninfo_unexecuted_blocks=1 00:19:13.799 00:19:13.799 ' 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.799 --rc genhtml_branch_coverage=1 00:19:13.799 --rc genhtml_function_coverage=1 00:19:13.799 --rc genhtml_legend=1 00:19:13.799 --rc geninfo_all_blocks=1 00:19:13.799 --rc geninfo_unexecuted_blocks=1 00:19:13.799 00:19:13.799 ' 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.799 --rc genhtml_branch_coverage=1 00:19:13.799 --rc genhtml_function_coverage=1 00:19:13.799 --rc genhtml_legend=1 00:19:13.799 --rc geninfo_all_blocks=1 00:19:13.799 --rc geninfo_unexecuted_blocks=1 00:19:13.799 00:19:13.799 ' 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
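The xtrace above steps through scripts/common.sh's lt/cmp_versions helpers: each version string is split on '.', '-' and ':' (the IFS=.-: assignment), and the parts are compared numerically left to right, so lcov 1.15 sorts below 2 and the pre-2.x LCOV_OPTS/LCOV coverage flags get exported. A minimal bash sketch of that component-wise comparison follows; it is illustrative only, not the exact SPDK source (the real script also routes each part through the decimal() guard seen in the trace, which validates ^[0-9]+$).

    cmp_versions() {
        # Split both version strings on '.', '-' and ':' (mirrors IFS=.-: above).
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # Walk the longer of the two component lists; absent parts count as 0,
        # so 1.15 vs 2 compares 1 against 2 and stops there. Assumes purely
        # numeric components (10# forces base-10, guarding leading zeros).
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local c1=$((10#${ver1[v]:-0})) c2=$((10#${ver2[v]:-0}))
            ((c1 > c2)) && { [[ $op == '>' ]]; return; }
            ((c1 < c2)) && { [[ $op == '<' ]]; return; }
        done
        # All components equal.
        [[ $op == '=' ]]
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    # lt 1.15 2 succeeds, which is why the trace above enables the
    # old-lcov --rc branch/function coverage options.
    lt 1.15 2 && echo "old lcov detected"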
00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:13.799 22:15:19 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89315 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89315 00:19:13.799 22:15:19 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89315 ']' 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.799 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.800 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.800 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.800 22:15:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:13.800 [2024-12-16 22:15:20.023251] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:13.800 [2024-12-16 22:15:20.023670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89315 ] 00:19:14.061 [2024-12-16 22:15:20.187085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:14.061 [2024-12-16 22:15:20.219205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.061 [2024-12-16 22:15:20.219499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.061 [2024-12-16 22:15:20.219551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.633 22:15:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.633 22:15:20 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:14.633 22:15:20 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:14.633 22:15:20 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:14.633 22:15:20 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:14.633 22:15:20 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:14.633 22:15:20 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:14.633 22:15:20 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:14.895 22:15:21 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:14.895 22:15:21 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:14.895 22:15:21 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:14.895 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:14.895 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:14.895 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:14.895 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:14.895 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:15.157 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:15.157 { 00:19:15.157 "name": "nvme0n1", 00:19:15.157 "aliases": [ 
00:19:15.157 "193c3af8-a11c-4983-a910-d2d2ea9ac440" 00:19:15.157 ], 00:19:15.157 "product_name": "NVMe disk", 00:19:15.157 "block_size": 4096, 00:19:15.157 "num_blocks": 1310720, 00:19:15.157 "uuid": "193c3af8-a11c-4983-a910-d2d2ea9ac440", 00:19:15.157 "numa_id": -1, 00:19:15.157 "assigned_rate_limits": { 00:19:15.157 "rw_ios_per_sec": 0, 00:19:15.157 "rw_mbytes_per_sec": 0, 00:19:15.157 "r_mbytes_per_sec": 0, 00:19:15.157 "w_mbytes_per_sec": 0 00:19:15.157 }, 00:19:15.157 "claimed": true, 00:19:15.157 "claim_type": "read_many_write_one", 00:19:15.157 "zoned": false, 00:19:15.157 "supported_io_types": { 00:19:15.157 "read": true, 00:19:15.157 "write": true, 00:19:15.157 "unmap": true, 00:19:15.157 "flush": true, 00:19:15.157 "reset": true, 00:19:15.157 "nvme_admin": true, 00:19:15.157 "nvme_io": true, 00:19:15.157 "nvme_io_md": false, 00:19:15.157 "write_zeroes": true, 00:19:15.157 "zcopy": false, 00:19:15.157 "get_zone_info": false, 00:19:15.157 "zone_management": false, 00:19:15.157 "zone_append": false, 00:19:15.157 "compare": true, 00:19:15.157 "compare_and_write": false, 00:19:15.157 "abort": true, 00:19:15.157 "seek_hole": false, 00:19:15.157 "seek_data": false, 00:19:15.157 "copy": true, 00:19:15.157 "nvme_iov_md": false 00:19:15.157 }, 00:19:15.157 "driver_specific": { 00:19:15.157 "nvme": [ 00:19:15.157 { 00:19:15.157 "pci_address": "0000:00:11.0", 00:19:15.157 "trid": { 00:19:15.157 "trtype": "PCIe", 00:19:15.157 "traddr": "0000:00:11.0" 00:19:15.157 }, 00:19:15.157 "ctrlr_data": { 00:19:15.157 "cntlid": 0, 00:19:15.157 "vendor_id": "0x1b36", 00:19:15.157 "model_number": "QEMU NVMe Ctrl", 00:19:15.157 "serial_number": "12341", 00:19:15.157 "firmware_revision": "8.0.0", 00:19:15.157 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:15.157 "oacs": { 00:19:15.157 "security": 0, 00:19:15.157 "format": 1, 00:19:15.157 "firmware": 0, 00:19:15.157 "ns_manage": 1 00:19:15.157 }, 00:19:15.157 "multi_ctrlr": false, 00:19:15.157 "ana_reporting": false 00:19:15.157 }, 00:19:15.157 "vs": { 00:19:15.157 "nvme_version": "1.4" 00:19:15.157 }, 00:19:15.157 "ns_data": { 00:19:15.157 "id": 1, 00:19:15.157 "can_share": false 00:19:15.157 } 00:19:15.157 } 00:19:15.157 ], 00:19:15.157 "mp_policy": "active_passive" 00:19:15.157 } 00:19:15.157 } 00:19:15.157 ]' 00:19:15.157 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:15.157 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:15.157 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:15.157 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:15.157 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:15.157 22:15:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:15.157 22:15:21 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:15.157 22:15:21 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:15.157 22:15:21 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:15.157 22:15:21 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:15.157 22:15:21 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:15.419 22:15:21 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=8ff40592-780a-4797-bd3b-92fbd33a7428 00:19:15.419 22:15:21 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:15.419 22:15:21 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 8ff40592-780a-4797-bd3b-92fbd33a7428 00:19:15.680 22:15:21 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:15.941 22:15:22 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=dfc22ef2-20ad-40b8-ba21-cc123d07c506 00:19:15.941 22:15:22 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dfc22ef2-20ad-40b8-ba21-cc123d07c506 00:19:16.202 22:15:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.202 22:15:22 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.202 22:15:22 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:16.202 22:15:22 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:16.202 22:15:22 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.202 22:15:22 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:16.202 22:15:22 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.202 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.202 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:16.202 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:16.202 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:16.202 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.463 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:16.463 { 00:19:16.463 "name": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 00:19:16.463 "aliases": [ 00:19:16.463 "lvs/nvme0n1p0" 00:19:16.463 ], 00:19:16.463 "product_name": "Logical Volume", 00:19:16.463 "block_size": 4096, 00:19:16.463 "num_blocks": 26476544, 00:19:16.463 "uuid": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 00:19:16.463 "assigned_rate_limits": { 00:19:16.463 "rw_ios_per_sec": 0, 00:19:16.463 "rw_mbytes_per_sec": 0, 00:19:16.463 "r_mbytes_per_sec": 0, 00:19:16.463 "w_mbytes_per_sec": 0 00:19:16.463 }, 00:19:16.463 "claimed": false, 00:19:16.463 "zoned": false, 00:19:16.463 "supported_io_types": { 00:19:16.463 "read": true, 00:19:16.463 "write": true, 00:19:16.463 "unmap": true, 00:19:16.463 "flush": false, 00:19:16.463 "reset": true, 00:19:16.463 "nvme_admin": false, 00:19:16.463 "nvme_io": false, 00:19:16.463 "nvme_io_md": false, 00:19:16.463 "write_zeroes": true, 00:19:16.463 "zcopy": false, 00:19:16.463 "get_zone_info": false, 00:19:16.463 "zone_management": false, 00:19:16.463 "zone_append": false, 00:19:16.463 "compare": false, 00:19:16.463 "compare_and_write": false, 00:19:16.463 "abort": false, 00:19:16.463 "seek_hole": true, 00:19:16.463 "seek_data": true, 00:19:16.463 "copy": false, 00:19:16.463 "nvme_iov_md": false 00:19:16.463 }, 00:19:16.463 "driver_specific": { 00:19:16.463 "lvol": { 00:19:16.463 "lvol_store_uuid": "dfc22ef2-20ad-40b8-ba21-cc123d07c506", 00:19:16.463 "base_bdev": "nvme0n1", 00:19:16.463 "thin_provision": true, 00:19:16.463 "num_allocated_clusters": 0, 00:19:16.463 "snapshot": false, 00:19:16.463 "clone": false, 00:19:16.463 "esnap_clone": false 00:19:16.463 } 00:19:16.463 } 00:19:16.463 } 00:19:16.463 ]' 00:19:16.463 22:15:22 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:16.463 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:16.463 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:16.463 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:16.463 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:16.463 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:16.463 22:15:22 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:16.463 22:15:22 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:16.463 22:15:22 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:16.725 22:15:22 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:16.725 22:15:22 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:16.725 22:15:22 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.725 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.725 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:16.725 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:16.725 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:16.725 22:15:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:16.987 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:16.987 { 00:19:16.987 "name": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 00:19:16.987 "aliases": [ 00:19:16.987 "lvs/nvme0n1p0" 00:19:16.987 ], 00:19:16.987 "product_name": "Logical Volume", 00:19:16.987 "block_size": 4096, 00:19:16.987 "num_blocks": 26476544, 00:19:16.987 "uuid": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 00:19:16.987 "assigned_rate_limits": { 00:19:16.987 "rw_ios_per_sec": 0, 00:19:16.987 "rw_mbytes_per_sec": 0, 00:19:16.987 "r_mbytes_per_sec": 0, 00:19:16.987 "w_mbytes_per_sec": 0 00:19:16.987 }, 00:19:16.987 "claimed": false, 00:19:16.987 "zoned": false, 00:19:16.987 "supported_io_types": { 00:19:16.987 "read": true, 00:19:16.987 "write": true, 00:19:16.987 "unmap": true, 00:19:16.987 "flush": false, 00:19:16.987 "reset": true, 00:19:16.987 "nvme_admin": false, 00:19:16.987 "nvme_io": false, 00:19:16.987 "nvme_io_md": false, 00:19:16.987 "write_zeroes": true, 00:19:16.987 "zcopy": false, 00:19:16.987 "get_zone_info": false, 00:19:16.987 "zone_management": false, 00:19:16.987 "zone_append": false, 00:19:16.987 "compare": false, 00:19:16.987 "compare_and_write": false, 00:19:16.987 "abort": false, 00:19:16.987 "seek_hole": true, 00:19:16.987 "seek_data": true, 00:19:16.987 "copy": false, 00:19:16.987 "nvme_iov_md": false 00:19:16.987 }, 00:19:16.987 "driver_specific": { 00:19:16.987 "lvol": { 00:19:16.987 "lvol_store_uuid": "dfc22ef2-20ad-40b8-ba21-cc123d07c506", 00:19:16.987 "base_bdev": "nvme0n1", 00:19:16.987 "thin_provision": true, 00:19:16.987 "num_allocated_clusters": 0, 00:19:16.987 "snapshot": false, 00:19:16.987 "clone": false, 00:19:16.987 "esnap_clone": false 00:19:16.987 } 00:19:16.987 } 00:19:16.987 } 00:19:16.987 ]' 00:19:16.987 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:16.987 22:15:23 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:16.987 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:16.987 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:16.987 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:16.987 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:16.987 22:15:23 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:16.987 22:15:23 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:17.248 22:15:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:17.248 22:15:23 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:17.248 22:15:23 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:17.248 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:17.248 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:17.248 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:17.248 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:17.248 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 00:19:17.248 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:17.248 { 00:19:17.248 "name": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 00:19:17.248 "aliases": [ 00:19:17.248 "lvs/nvme0n1p0" 00:19:17.248 ], 00:19:17.248 "product_name": "Logical Volume", 00:19:17.248 "block_size": 4096, 00:19:17.248 "num_blocks": 26476544, 00:19:17.248 "uuid": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 00:19:17.248 "assigned_rate_limits": { 00:19:17.248 "rw_ios_per_sec": 0, 00:19:17.248 "rw_mbytes_per_sec": 0, 00:19:17.248 "r_mbytes_per_sec": 0, 00:19:17.248 "w_mbytes_per_sec": 0 00:19:17.248 }, 00:19:17.248 "claimed": false, 00:19:17.248 "zoned": false, 00:19:17.248 "supported_io_types": { 00:19:17.248 "read": true, 00:19:17.248 "write": true, 00:19:17.248 "unmap": true, 00:19:17.248 "flush": false, 00:19:17.248 "reset": true, 00:19:17.248 "nvme_admin": false, 00:19:17.248 "nvme_io": false, 00:19:17.248 "nvme_io_md": false, 00:19:17.248 "write_zeroes": true, 00:19:17.248 "zcopy": false, 00:19:17.248 "get_zone_info": false, 00:19:17.248 "zone_management": false, 00:19:17.248 "zone_append": false, 00:19:17.248 "compare": false, 00:19:17.248 "compare_and_write": false, 00:19:17.248 "abort": false, 00:19:17.248 "seek_hole": true, 00:19:17.248 "seek_data": true, 00:19:17.248 "copy": false, 00:19:17.248 "nvme_iov_md": false 00:19:17.248 }, 00:19:17.248 "driver_specific": { 00:19:17.248 "lvol": { 00:19:17.248 "lvol_store_uuid": "dfc22ef2-20ad-40b8-ba21-cc123d07c506", 00:19:17.248 "base_bdev": "nvme0n1", 00:19:17.248 "thin_provision": true, 00:19:17.248 "num_allocated_clusters": 0, 00:19:17.248 "snapshot": false, 00:19:17.248 "clone": false, 00:19:17.248 "esnap_clone": false 00:19:17.248 } 00:19:17.248 } 00:19:17.248 } 00:19:17.248 ]' 00:19:17.248 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:17.514 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:17.514 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:17.514 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:17.514 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:17.514 22:15:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:17.514 22:15:23 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:17.514 22:15:23 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b25e7e99-4719-40f3-afe1-ecbeddd0c5a8 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:17.514 [2024-12-16 22:15:23.826296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.514 [2024-12-16 22:15:23.826437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.514 [2024-12-16 22:15:23.826454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:17.514 [2024-12-16 22:15:23.826472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.514 [2024-12-16 22:15:23.828360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.514 [2024-12-16 22:15:23.828391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.514 [2024-12-16 22:15:23.828398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:19:17.514 [2024-12-16 22:15:23.828407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.514 [2024-12-16 22:15:23.828480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.514 [2024-12-16 22:15:23.828659] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.514 [2024-12-16 22:15:23.828669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.514 [2024-12-16 22:15:23.828676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.515 [2024-12-16 22:15:23.828684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:17.515 [2024-12-16 22:15:23.828693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.828765] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:19:17.515 [2024-12-16 22:15:23.829742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 [2024-12-16 22:15:23.829853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:17.515 [2024-12-16 22:15:23.829868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:17.515 [2024-12-16 22:15:23.829874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.834635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 [2024-12-16 22:15:23.834673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.515 [2024-12-16 22:15:23.834681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:19:17.515 [2024-12-16 22:15:23.834687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.834778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 [2024-12-16 22:15:23.834786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.515 [2024-12-16 22:15:23.834794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.046 ms 00:19:17.515 [2024-12-16 22:15:23.834801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.834826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 [2024-12-16 22:15:23.834832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.515 [2024-12-16 22:15:23.834850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:17.515 [2024-12-16 22:15:23.834856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.834887] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:17.515 [2024-12-16 22:15:23.836123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 [2024-12-16 22:15:23.836226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.515 [2024-12-16 22:15:23.836247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:19:17.515 [2024-12-16 22:15:23.836254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.836289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 [2024-12-16 22:15:23.836297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.515 [2024-12-16 22:15:23.836303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:17.515 [2024-12-16 22:15:23.836312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.836335] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:17.515 [2024-12-16 22:15:23.836455] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:17.515 [2024-12-16 22:15:23.836464] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.515 [2024-12-16 22:15:23.836474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:17.515 [2024-12-16 22:15:23.836482] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836490] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836496] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:17.515 [2024-12-16 22:15:23.836503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.515 [2024-12-16 22:15:23.836508] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:17.515 [2024-12-16 22:15:23.836515] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:17.515 [2024-12-16 22:15:23.836523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 [2024-12-16 22:15:23.836529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.515 [2024-12-16 22:15:23.836535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:19:17.515 [2024-12-16 22:15:23.836542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.836612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.515 
[2024-12-16 22:15:23.836621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.515 [2024-12-16 22:15:23.836627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:17.515 [2024-12-16 22:15:23.836633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.515 [2024-12-16 22:15:23.836723] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.515 [2024-12-16 22:15:23.836732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.515 [2024-12-16 22:15:23.836739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.515 [2024-12-16 22:15:23.836768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.515 [2024-12-16 22:15:23.836784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.515 [2024-12-16 22:15:23.836795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.515 [2024-12-16 22:15:23.836802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:17.515 [2024-12-16 22:15:23.836807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.515 [2024-12-16 22:15:23.836815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.515 [2024-12-16 22:15:23.836823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:17.515 [2024-12-16 22:15:23.836830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.515 [2024-12-16 22:15:23.836853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.515 [2024-12-16 22:15:23.836873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.515 [2024-12-16 22:15:23.836893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.515 [2024-12-16 22:15:23.836912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:17.515 [2024-12-16 22:15:23.836935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.515 [2024-12-16 22:15:23.836948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.515 [2024-12-16 22:15:23.836953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:17.515 [2024-12-16 22:15:23.836961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.515 [2024-12-16 22:15:23.836966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.515 [2024-12-16 22:15:23.836973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:17.515 [2024-12-16 22:15:23.836979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.515 [2024-12-16 22:15:23.836986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:17.515 [2024-12-16 22:15:23.836992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:17.515 [2024-12-16 22:15:23.836999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.515 [2024-12-16 22:15:23.837004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:17.515 [2024-12-16 22:15:23.837011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:17.515 [2024-12-16 22:15:23.837016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.515 [2024-12-16 22:15:23.837024] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.515 [2024-12-16 22:15:23.837030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.515 [2024-12-16 22:15:23.837039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.515 [2024-12-16 22:15:23.837046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.515 [2024-12-16 22:15:23.837061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.515 [2024-12-16 22:15:23.837067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.515 [2024-12-16 22:15:23.837074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.515 [2024-12-16 22:15:23.837080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.515 [2024-12-16 22:15:23.837087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.515 [2024-12-16 22:15:23.837092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.515 [2024-12-16 22:15:23.837101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.515 [2024-12-16 22:15:23.837109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.515 [2024-12-16 22:15:23.837117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:17.515 [2024-12-16 22:15:23.837124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:17.515 [2024-12-16 22:15:23.837132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:17.515 [2024-12-16 22:15:23.837138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:17.516 [2024-12-16 22:15:23.837145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:17.516 [2024-12-16 22:15:23.837151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:17.516 [2024-12-16 22:15:23.837160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:17.516 [2024-12-16 22:15:23.837166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:17.516 [2024-12-16 22:15:23.837174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:17.516 [2024-12-16 22:15:23.837179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:17.516 [2024-12-16 22:15:23.837187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:17.516 [2024-12-16 22:15:23.837193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:17.516 [2024-12-16 22:15:23.837200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:17.516 [2024-12-16 22:15:23.837206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:17.516 [2024-12-16 22:15:23.837213] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.516 [2024-12-16 22:15:23.837220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.516 [2024-12-16 22:15:23.837228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.516 [2024-12-16 22:15:23.837233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.516 [2024-12-16 22:15:23.837239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.516 [2024-12-16 22:15:23.837244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.516 [2024-12-16 22:15:23.837251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.516 [2024-12-16 22:15:23.837257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.516 [2024-12-16 22:15:23.837266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:19:17.516 [2024-12-16 22:15:23.837272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.516 [2024-12-16 22:15:23.837325] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:17.516 [2024-12-16 22:15:23.837332] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:20.096 [2024-12-16 22:15:26.186120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.186177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:20.096 [2024-12-16 22:15:26.186196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2348.781 ms 00:19:20.096 [2024-12-16 22:15:26.186205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.194589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.194627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.096 [2024-12-16 22:15:26.194641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.297 ms 00:19:20.096 [2024-12-16 22:15:26.194662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.194800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.194811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.096 [2024-12-16 22:15:26.194821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:20.096 [2024-12-16 22:15:26.194831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.218792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.219096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.096 [2024-12-16 22:15:26.219139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.880 ms 00:19:20.096 [2024-12-16 22:15:26.219157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.219301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.219324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.096 [2024-12-16 22:15:26.219350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.096 [2024-12-16 22:15:26.219364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.219803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.219869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.096 [2024-12-16 22:15:26.219895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:19:20.096 [2024-12-16 22:15:26.219912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.220163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.220190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.096 [2024-12-16 22:15:26.220210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:19:20.096 [2024-12-16 22:15:26.220244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.226318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.226441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:20.096 [2024-12-16 22:15:26.226459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.023 ms 00:19:20.096 [2024-12-16 22:15:26.226467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.234725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:20.096 [2024-12-16 22:15:26.249027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.249060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.096 [2024-12-16 22:15:26.249080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.467 ms 00:19:20.096 [2024-12-16 22:15:26.249089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.307657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.307696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:20.096 [2024-12-16 22:15:26.307707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.482 ms 00:19:20.096 [2024-12-16 22:15:26.307729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.307915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.307927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.096 [2024-12-16 22:15:26.307935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:20.096 [2024-12-16 22:15:26.307944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.311142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.311178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:20.096 [2024-12-16 22:15:26.311188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:19:20.096 [2024-12-16 22:15:26.311198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.313382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.313412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:20.096 [2024-12-16 22:15:26.313421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:19:20.096 [2024-12-16 22:15:26.313430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.313747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.313763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.096 [2024-12-16 22:15:26.313772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:19:20.096 [2024-12-16 22:15:26.313783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.096 [2024-12-16 22:15:26.341275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.341310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:20.096 [2024-12-16 22:15:26.341321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.461 ms 00:19:20.096 [2024-12-16 22:15:26.341332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:20.096 [2024-12-16 22:15:26.345062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.096 [2024-12-16 22:15:26.345096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:20.096 [2024-12-16 22:15:26.345106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.661 ms 00:19:20.097 [2024-12-16 22:15:26.345117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.097 [2024-12-16 22:15:26.348054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.097 [2024-12-16 22:15:26.348087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:20.097 [2024-12-16 22:15:26.348096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.894 ms 00:19:20.097 [2024-12-16 22:15:26.348104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.097 [2024-12-16 22:15:26.351394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.097 [2024-12-16 22:15:26.351428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.097 [2024-12-16 22:15:26.351436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:19:20.097 [2024-12-16 22:15:26.351447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.097 [2024-12-16 22:15:26.351509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.097 [2024-12-16 22:15:26.351521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.097 [2024-12-16 22:15:26.351529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:20.097 [2024-12-16 22:15:26.351538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.097 [2024-12-16 22:15:26.351609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.097 [2024-12-16 22:15:26.351620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.097 [2024-12-16 22:15:26.351628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:20.097 [2024-12-16 22:15:26.351637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.097 [2024-12-16 22:15:26.352477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.097 [2024-12-16 22:15:26.353462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2525.897 ms, result 0 00:19:20.097 [2024-12-16 22:15:26.354178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.097 { 00:19:20.097 "name": "ftl0", 00:19:20.097 "uuid": "14ef5026-cac1-4684-8b7f-e1ccdf91ad2b" 00:19:20.097 } 00:19:20.097 22:15:26 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:20.097 22:15:26 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:20.097 22:15:26 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:20.097 22:15:26 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:20.097 22:15:26 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:20.097 22:15:26 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:20.097 22:15:26 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:20.357 22:15:26 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:20.616 [ 00:19:20.616 { 00:19:20.616 "name": "ftl0", 00:19:20.616 "aliases": [ 00:19:20.616 "14ef5026-cac1-4684-8b7f-e1ccdf91ad2b" 00:19:20.616 ], 00:19:20.616 "product_name": "FTL disk", 00:19:20.616 "block_size": 4096, 00:19:20.616 "num_blocks": 23592960, 00:19:20.616 "uuid": "14ef5026-cac1-4684-8b7f-e1ccdf91ad2b", 00:19:20.616 "assigned_rate_limits": { 00:19:20.616 "rw_ios_per_sec": 0, 00:19:20.616 "rw_mbytes_per_sec": 0, 00:19:20.616 "r_mbytes_per_sec": 0, 00:19:20.616 "w_mbytes_per_sec": 0 00:19:20.616 }, 00:19:20.616 "claimed": false, 00:19:20.616 "zoned": false, 00:19:20.616 "supported_io_types": { 00:19:20.616 "read": true, 00:19:20.616 "write": true, 00:19:20.616 "unmap": true, 00:19:20.616 "flush": true, 00:19:20.616 "reset": false, 00:19:20.616 "nvme_admin": false, 00:19:20.616 "nvme_io": false, 00:19:20.616 "nvme_io_md": false, 00:19:20.616 "write_zeroes": true, 00:19:20.616 "zcopy": false, 00:19:20.616 "get_zone_info": false, 00:19:20.616 "zone_management": false, 00:19:20.616 "zone_append": false, 00:19:20.616 "compare": false, 00:19:20.616 "compare_and_write": false, 00:19:20.616 "abort": false, 00:19:20.616 "seek_hole": false, 00:19:20.616 "seek_data": false, 00:19:20.616 "copy": false, 00:19:20.616 "nvme_iov_md": false 00:19:20.616 }, 00:19:20.616 "driver_specific": { 00:19:20.616 "ftl": { 00:19:20.616 "base_bdev": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 00:19:20.616 "cache": "nvc0n1p0" 00:19:20.616 } 00:19:20.616 } 00:19:20.616 } 00:19:20.616 ] 00:19:20.616 22:15:26 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:20.616 22:15:26 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:20.616 22:15:26 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:20.874 22:15:26 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:20.874 22:15:26 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:20.874 22:15:27 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:20.874 { 00:19:20.874 "name": "ftl0", 00:19:20.874 "aliases": [ 00:19:20.874 "14ef5026-cac1-4684-8b7f-e1ccdf91ad2b" 00:19:20.874 ], 00:19:20.874 "product_name": "FTL disk", 00:19:20.874 "block_size": 4096, 00:19:20.874 "num_blocks": 23592960, 00:19:20.874 "uuid": "14ef5026-cac1-4684-8b7f-e1ccdf91ad2b", 00:19:20.874 "assigned_rate_limits": { 00:19:20.874 "rw_ios_per_sec": 0, 00:19:20.874 "rw_mbytes_per_sec": 0, 00:19:20.874 "r_mbytes_per_sec": 0, 00:19:20.874 "w_mbytes_per_sec": 0 00:19:20.874 }, 00:19:20.874 "claimed": false, 00:19:20.874 "zoned": false, 00:19:20.874 "supported_io_types": { 00:19:20.874 "read": true, 00:19:20.874 "write": true, 00:19:20.874 "unmap": true, 00:19:20.874 "flush": true, 00:19:20.874 "reset": false, 00:19:20.874 "nvme_admin": false, 00:19:20.874 "nvme_io": false, 00:19:20.874 "nvme_io_md": false, 00:19:20.874 "write_zeroes": true, 00:19:20.874 "zcopy": false, 00:19:20.874 "get_zone_info": false, 00:19:20.874 "zone_management": false, 00:19:20.874 "zone_append": false, 00:19:20.874 "compare": false, 00:19:20.874 "compare_and_write": false, 00:19:20.874 "abort": false, 00:19:20.874 "seek_hole": false, 00:19:20.874 "seek_data": false, 00:19:20.874 "copy": false, 00:19:20.874 "nvme_iov_md": false 00:19:20.874 }, 00:19:20.874 "driver_specific": { 00:19:20.874 "ftl": { 00:19:20.874 "base_bdev": "b25e7e99-4719-40f3-afe1-ecbeddd0c5a8", 
00:19:20.874 "cache": "nvc0n1p0" 00:19:20.874 } 00:19:20.874 } 00:19:20.874 } 00:19:20.874 ]' 00:19:20.874 22:15:27 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:20.874 22:15:27 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:20.874 22:15:27 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:21.134 [2024-12-16 22:15:27.378507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.378541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:21.134 [2024-12-16 22:15:27.378553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:21.134 [2024-12-16 22:15:27.378559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.378591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:21.134 [2024-12-16 22:15:27.379014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.379031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:21.134 [2024-12-16 22:15:27.379038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:19:21.134 [2024-12-16 22:15:27.379046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.379518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.379543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:21.134 [2024-12-16 22:15:27.379551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:19:21.134 [2024-12-16 22:15:27.379559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.382269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.382288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:21.134 [2024-12-16 22:15:27.382303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:19:21.134 [2024-12-16 22:15:27.382311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.387438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.387468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:21.134 [2024-12-16 22:15:27.387475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.081 ms 00:19:21.134 [2024-12-16 22:15:27.387485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.389080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.389184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:21.134 [2024-12-16 22:15:27.389196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:19:21.134 [2024-12-16 22:15:27.389203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.393183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.393217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:21.134 [2024-12-16 22:15:27.393225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.942 ms 00:19:21.134 [2024-12-16 22:15:27.393243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.393403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.393413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:21.134 [2024-12-16 22:15:27.393419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:19:21.134 [2024-12-16 22:15:27.393426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.395062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.395091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:21.134 [2024-12-16 22:15:27.395098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:19:21.134 [2024-12-16 22:15:27.395106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.396581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.396677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:21.134 [2024-12-16 22:15:27.396688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:19:21.134 [2024-12-16 22:15:27.396696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.398887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.398996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:21.134 [2024-12-16 22:15:27.399027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:19:21.134 [2024-12-16 22:15:27.399051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.401085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.134 [2024-12-16 22:15:27.401163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:21.134 [2024-12-16 22:15:27.401187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:19:21.134 [2024-12-16 22:15:27.401210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.134 [2024-12-16 22:15:27.401295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:21.134 [2024-12-16 22:15:27.401334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401501] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.401983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.402003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.402027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.402047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.402071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.402092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:21.134 [2024-12-16 22:15:27.402116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 
22:15:27.402160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:19:21.135 [2024-12-16 22:15:27.402728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.402980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:21.135 [2024-12-16 22:15:27.403752] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:21.135 [2024-12-16 22:15:27.403774] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:19:21.135 [2024-12-16 22:15:27.403799] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:21.135 [2024-12-16 22:15:27.403819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:21.135 [2024-12-16 22:15:27.403865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:21.135 [2024-12-16 22:15:27.403886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:21.135 [2024-12-16 22:15:27.403908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:21.136 [2024-12-16 22:15:27.403929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:21.136 
[2024-12-16 22:15:27.403951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:21.136 [2024-12-16 22:15:27.403969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:21.136 [2024-12-16 22:15:27.403990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:21.136 [2024-12-16 22:15:27.404008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.136 [2024-12-16 22:15:27.404032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:21.136 [2024-12-16 22:15:27.404053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.715 ms 00:19:21.136 [2024-12-16 22:15:27.404080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.406933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.136 [2024-12-16 22:15:27.407220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:21.136 [2024-12-16 22:15:27.407258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:19:21.136 [2024-12-16 22:15:27.407284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.407439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.136 [2024-12-16 22:15:27.407477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:21.136 [2024-12-16 22:15:27.407501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:21.136 [2024-12-16 22:15:27.407524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.414117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.414230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.136 [2024-12-16 22:15:27.414246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.414255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.414353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.414370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.136 [2024-12-16 22:15:27.414378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.414389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.414456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.414470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.136 [2024-12-16 22:15:27.414477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.414486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.414524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.414533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.136 [2024-12-16 22:15:27.414541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.414549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.423858] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.423893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.136 [2024-12-16 22:15:27.423903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.423911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.431619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.431657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.136 [2024-12-16 22:15:27.431667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.431679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.431749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.431771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.136 [2024-12-16 22:15:27.431781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.431800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.431877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.431888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.136 [2024-12-16 22:15:27.431896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.431904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.431986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.431997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.136 [2024-12-16 22:15:27.432005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.432016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.432062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.432073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:21.136 [2024-12-16 22:15:27.432081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.432091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.432148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.432163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.136 [2024-12-16 22:15:27.432170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.432190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 [2024-12-16 22:15:27.432246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.136 [2024-12-16 22:15:27.432267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.136 [2024-12-16 22:15:27.432276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.136 [2024-12-16 22:15:27.432286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.136 
[2024-12-16 22:15:27.432456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.929 ms, result 0 00:19:21.136 true 00:19:21.136 22:15:27 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89315 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89315 ']' 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89315 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89315 00:19:21.136 killing process with pid 89315 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89315' 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89315 00:19:21.136 22:15:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89315 00:19:26.403 22:15:32 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:26.974 65536+0 records in 00:19:26.974 65536+0 records out 00:19:26.974 268435456 bytes (268 MB, 256 MiB) copied, 0.802915 s, 334 MB/s 00:19:26.974 22:15:33 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:26.974 [2024-12-16 22:15:33.109991] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
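As a side note, the dd statistics above are internally consistent: 65536 records of 4 KiB each come to 65536 * 4096 = 268435456 bytes (256 MiB), and 268435456 bytes / 0.802915 s is approximately 334 MB/s, matching the reported rate. A minimal sketch of this pattern-generation step and the spdk_dd write that follows it is shown below; the of= path given to dd is an assumption inferred from the --if argument of the traced spdk_dd command, since xtrace does not show dd's output redirection.

# Write a 256 MiB random pattern to a file, then stream it into the ftl0 bdev.
# The dd output path is assumed from the spdk_dd --if argument traced above.
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
  --ob=ftl0 \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json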
00:19:26.974 [2024-12-16 22:15:33.110105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89470 ] 00:19:26.974 [2024-12-16 22:15:33.269339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.974 [2024-12-16 22:15:33.298016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.236 [2024-12-16 22:15:33.413344] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.236 [2024-12-16 22:15:33.413428] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.236 [2024-12-16 22:15:33.574397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.236 [2024-12-16 22:15:33.574457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.236 [2024-12-16 22:15:33.574477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.236 [2024-12-16 22:15:33.574486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.236 [2024-12-16 22:15:33.577038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.236 [2024-12-16 22:15:33.577084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.236 [2024-12-16 22:15:33.577096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:19:27.236 [2024-12-16 22:15:33.577103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.236 [2024-12-16 22:15:33.577206] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.236 [2024-12-16 22:15:33.577472] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.236 [2024-12-16 22:15:33.577490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.236 [2024-12-16 22:15:33.577498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.236 [2024-12-16 22:15:33.577507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:19:27.236 [2024-12-16 22:15:33.577514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.236 [2024-12-16 22:15:33.579354] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:27.498 [2024-12-16 22:15:33.583136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.498 [2024-12-16 22:15:33.583186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:27.498 [2024-12-16 22:15:33.583203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.785 ms 00:19:27.498 [2024-12-16 22:15:33.583212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.498 [2024-12-16 22:15:33.583289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.498 [2024-12-16 22:15:33.583300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:27.498 [2024-12-16 22:15:33.583309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:27.498 [2024-12-16 22:15:33.583317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.498 [2024-12-16 22:15:33.591135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:27.498 [2024-12-16 22:15:33.591183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.498 [2024-12-16 22:15:33.591193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.756 ms 00:19:27.498 [2024-12-16 22:15:33.591201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.498 [2024-12-16 22:15:33.591344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.498 [2024-12-16 22:15:33.591355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.498 [2024-12-16 22:15:33.591365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:27.498 [2024-12-16 22:15:33.591376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.498 [2024-12-16 22:15:33.591405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.498 [2024-12-16 22:15:33.591414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.498 [2024-12-16 22:15:33.591422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:27.498 [2024-12-16 22:15:33.591433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.498 [2024-12-16 22:15:33.591461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:27.498 [2024-12-16 22:15:33.593529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.498 [2024-12-16 22:15:33.593725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.498 [2024-12-16 22:15:33.593745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.078 ms 00:19:27.498 [2024-12-16 22:15:33.593759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.498 [2024-12-16 22:15:33.593813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.498 [2024-12-16 22:15:33.593825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.498 [2024-12-16 22:15:33.593876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:27.498 [2024-12-16 22:15:33.593886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.498 [2024-12-16 22:15:33.593909] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:27.498 [2024-12-16 22:15:33.593933] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:27.498 [2024-12-16 22:15:33.593968] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:27.498 [2024-12-16 22:15:33.593990] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:27.498 [2024-12-16 22:15:33.594096] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:27.498 [2024-12-16 22:15:33.594107] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.498 [2024-12-16 22:15:33.594118] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:27.498 [2024-12-16 22:15:33.594128] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:27.498 [2024-12-16 22:15:33.594138] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.498 [2024-12-16 22:15:33.594147] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:27.498 [2024-12-16 22:15:33.594154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.498 [2024-12-16 22:15:33.594162] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:27.498 [2024-12-16 22:15:33.594172] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:27.498 [2024-12-16 22:15:33.594181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.498 [2024-12-16 22:15:33.594193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.499 [2024-12-16 22:15:33.594205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:27.499 [2024-12-16 22:15:33.594212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.499 [2024-12-16 22:15:33.594300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.499 [2024-12-16 22:15:33.594310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.499 [2024-12-16 22:15:33.594319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:27.499 [2024-12-16 22:15:33.594327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.499 [2024-12-16 22:15:33.594427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.499 [2024-12-16 22:15:33.594449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.499 [2024-12-16 22:15:33.594459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.499 [2024-12-16 22:15:33.594486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.499 [2024-12-16 22:15:33.594514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.499 [2024-12-16 22:15:33.594530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.499 [2024-12-16 22:15:33.594538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:27.499 [2024-12-16 22:15:33.594546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.499 [2024-12-16 22:15:33.594554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.499 [2024-12-16 22:15:33.594562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:27.499 [2024-12-16 22:15:33.594570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.499 [2024-12-16 22:15:33.594586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594596] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.499 [2024-12-16 22:15:33.594613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.499 [2024-12-16 22:15:33.594642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.499 [2024-12-16 22:15:33.594662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:27.499 [2024-12-16 22:15:33.594683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.499 [2024-12-16 22:15:33.594704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.499 [2024-12-16 22:15:33.594716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.499 [2024-12-16 22:15:33.594723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:27.499 [2024-12-16 22:15:33.594730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.499 [2024-12-16 22:15:33.594736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:27.499 [2024-12-16 22:15:33.594743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:27.499 [2024-12-16 22:15:33.594752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:27.499 [2024-12-16 22:15:33.594766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:27.499 [2024-12-16 22:15:33.594773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594780] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.499 [2024-12-16 22:15:33.594788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.499 [2024-12-16 22:15:33.594795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.499 [2024-12-16 22:15:33.594809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.499 [2024-12-16 22:15:33.594817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.499 [2024-12-16 22:15:33.594823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.499 
[2024-12-16 22:15:33.594832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.499 [2024-12-16 22:15:33.594860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.499 [2024-12-16 22:15:33.594868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:27.499 [2024-12-16 22:15:33.594878] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.499 [2024-12-16 22:15:33.594891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.499 [2024-12-16 22:15:33.594902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:27.499 [2024-12-16 22:15:33.594910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:27.499 [2024-12-16 22:15:33.594918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:27.499 [2024-12-16 22:15:33.594926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:27.499 [2024-12-16 22:15:33.594934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:27.499 [2024-12-16 22:15:33.594941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:27.499 [2024-12-16 22:15:33.594949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:27.499 [2024-12-16 22:15:33.594956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:27.499 [2024-12-16 22:15:33.594964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:27.499 [2024-12-16 22:15:33.594972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:27.499 [2024-12-16 22:15:33.594979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:27.499 [2024-12-16 22:15:33.594986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:27.499 [2024-12-16 22:15:33.594994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:27.499 [2024-12-16 22:15:33.595003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:27.499 [2024-12-16 22:15:33.595011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.499 [2024-12-16 22:15:33.595023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.499 [2024-12-16 22:15:33.595034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.499 [2024-12-16 22:15:33.595042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.499 [2024-12-16 22:15:33.595049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.499 [2024-12-16 22:15:33.595056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.499 [2024-12-16 22:15:33.595064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.499 [2024-12-16 22:15:33.595071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.499 [2024-12-16 22:15:33.595078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:19:27.499 [2024-12-16 22:15:33.595092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.499 [2024-12-16 22:15:33.608790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.499 [2024-12-16 22:15:33.608995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.499 [2024-12-16 22:15:33.609016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.646 ms 00:19:27.499 [2024-12-16 22:15:33.609027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.499 [2024-12-16 22:15:33.609161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.499 [2024-12-16 22:15:33.609179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.499 [2024-12-16 22:15:33.609189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:27.499 [2024-12-16 22:15:33.609201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.499 [2024-12-16 22:15:33.629469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.499 [2024-12-16 22:15:33.629524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.499 [2024-12-16 22:15:33.629537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.243 ms 00:19:27.499 [2024-12-16 22:15:33.629545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.499 [2024-12-16 22:15:33.629643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.499 [2024-12-16 22:15:33.629656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.499 [2024-12-16 22:15:33.629667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.499 [2024-12-16 22:15:33.629675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.499 [2024-12-16 22:15:33.630277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.499 [2024-12-16 22:15:33.630310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.500 [2024-12-16 22:15:33.630324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:27.500 [2024-12-16 22:15:33.630334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.630503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.630526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.500 [2024-12-16 22:15:33.630537] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:27.500 [2024-12-16 22:15:33.630552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.638905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.638949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.500 [2024-12-16 22:15:33.638969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.321 ms 00:19:27.500 [2024-12-16 22:15:33.638980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.642852] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:27.500 [2024-12-16 22:15:33.642897] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:27.500 [2024-12-16 22:15:33.642910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.642918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:27.500 [2024-12-16 22:15:33.642926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:19:27.500 [2024-12-16 22:15:33.642933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.658656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.658700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:27.500 [2024-12-16 22:15:33.658720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.661 ms 00:19:27.500 [2024-12-16 22:15:33.658728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.661461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.661631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:27.500 [2024-12-16 22:15:33.661650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:19:27.500 [2024-12-16 22:15:33.661658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.664219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.664275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:27.500 [2024-12-16 22:15:33.664284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:19:27.500 [2024-12-16 22:15:33.664292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.664630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.664645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.500 [2024-12-16 22:15:33.664655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:19:27.500 [2024-12-16 22:15:33.664663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.688799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.689015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:27.500 [2024-12-16 22:15:33.689086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.107 ms 00:19:27.500 [2024-12-16 22:15:33.689112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.697261] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:27.500 [2024-12-16 22:15:33.715988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.716158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.500 [2024-12-16 22:15:33.716214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.777 ms 00:19:27.500 [2024-12-16 22:15:33.716238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.716344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.716372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:27.500 [2024-12-16 22:15:33.716399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:27.500 [2024-12-16 22:15:33.716412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.716473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.716482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.500 [2024-12-16 22:15:33.716491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:27.500 [2024-12-16 22:15:33.716498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.716527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.716537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.500 [2024-12-16 22:15:33.716546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:27.500 [2024-12-16 22:15:33.716553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.716595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:27.500 [2024-12-16 22:15:33.716606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.716615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:27.500 [2024-12-16 22:15:33.716628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:27.500 [2024-12-16 22:15:33.716636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.722257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.722307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.500 [2024-12-16 22:15:33.722319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.601 ms 00:19:27.500 [2024-12-16 22:15:33.722327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 [2024-12-16 22:15:33.722424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.500 [2024-12-16 22:15:33.722434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.500 [2024-12-16 22:15:33.722444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:27.500 [2024-12-16 22:15:33.722452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.500 
[2024-12-16 22:15:33.723495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:27.500 [2024-12-16 22:15:33.724792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.790 ms, result 0
00:19:27.500 [2024-12-16 22:15:33.726175] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:27.500 [2024-12-16 22:15:33.733460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:28.441  [2024-12-16T22:15:36.174Z] Copying: 14/256 [MB] (14 MBps) [2024-12-16T22:15:36.746Z] Copying: 30/256 [MB] (16 MBps) [2024-12-16T22:15:38.131Z] Copying: 46/256 [MB] (15 MBps) [2024-12-16T22:15:39.073Z] Copying: 82/256 [MB] (36 MBps) [2024-12-16T22:15:40.017Z] Copying: 103/256 [MB] (20 MBps) [2024-12-16T22:15:40.960Z] Copying: 116/256 [MB] (13 MBps) [2024-12-16T22:15:41.902Z] Copying: 137/256 [MB] (20 MBps) [2024-12-16T22:15:42.845Z] Copying: 156/256 [MB] (19 MBps) [2024-12-16T22:15:43.788Z] Copying: 175/256 [MB] (18 MBps) [2024-12-16T22:15:45.175Z] Copying: 190/256 [MB] (15 MBps) [2024-12-16T22:15:45.747Z] Copying: 209/256 [MB] (18 MBps) [2024-12-16T22:15:47.135Z] Copying: 227/256 [MB] (18 MBps) [2024-12-16T22:15:47.135Z] Copying: 242/256 [MB] (14 MBps) [2024-12-16T22:15:47.135Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-16 22:15:47.019812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:40.788 [2024-12-16 22:15:47.020878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:40.788 [2024-12-16 22:15:47.020974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:40.788 [2024-12-16 22:15:47.021040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:40.788 [2024-12-16 22:15:47.021060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:40.788 [2024-12-16 22:15:47.021088] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:40.788 [2024-12-16 22:15:47.021485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:40.788 [2024-12-16 22:15:47.021570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:40.788 [2024-12-16 22:15:47.021614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms
00:19:40.788 [2024-12-16 22:15:47.021638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:40.788 [2024-12-16 22:15:47.022973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:40.788 [2024-12-16 22:15:47.023055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:40.788 [2024-12-16 22:15:47.023101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms
00:19:40.788 [2024-12-16 22:15:47.023123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:40.788 [2024-12-16 22:15:47.028881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:40.788 [2024-12-16 22:15:47.028967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:40.788 [2024-12-16 22:15:47.029014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.731 ms
00:19:40.788 [2024-12-16 22:15:47.029031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:40.788
[2024-12-16 22:15:47.034329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.788 [2024-12-16 22:15:47.034410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:40.788 [2024-12-16 22:15:47.034468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:19:40.788 [2024-12-16 22:15:47.034489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.788 [2024-12-16 22:15:47.035710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.788 [2024-12-16 22:15:47.035790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:40.788 [2024-12-16 22:15:47.035848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:19:40.788 [2024-12-16 22:15:47.035865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.788 [2024-12-16 22:15:47.039208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.788 [2024-12-16 22:15:47.039292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.788 [2024-12-16 22:15:47.039344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:19:40.788 [2024-12-16 22:15:47.039360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.788 [2024-12-16 22:15:47.039461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.788 [2024-12-16 22:15:47.039481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:40.788 [2024-12-16 22:15:47.039496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:40.788 [2024-12-16 22:15:47.039535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.788 [2024-12-16 22:15:47.041083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.788 [2024-12-16 22:15:47.041161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:40.788 [2024-12-16 22:15:47.041202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:19:40.788 [2024-12-16 22:15:47.041219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.788 [2024-12-16 22:15:47.042568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.788 [2024-12-16 22:15:47.042645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:40.788 [2024-12-16 22:15:47.042684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:19:40.788 [2024-12-16 22:15:47.042692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.788 [2024-12-16 22:15:47.043575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.788 [2024-12-16 22:15:47.043597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:40.788 [2024-12-16 22:15:47.043604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:19:40.788 [2024-12-16 22:15:47.043608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.789 [2024-12-16 22:15:47.044500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.789 [2024-12-16 22:15:47.044582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.789 [2024-12-16 22:15:47.044593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:19:40.789 [2024-12-16 22:15:47.044599] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.789 [2024-12-16 22:15:47.044630] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.789 [2024-12-16 22:15:47.044643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044959] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.044999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 
22:15:47.045101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.789 [2024-12-16 22:15:47.045140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:19:40.790 [2024-12-16 22:15:47.045250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.790 [2024-12-16 22:15:47.045268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.790 [2024-12-16 22:15:47.045274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:19:40.790 [2024-12-16 22:15:47.045280] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:40.790 [2024-12-16 22:15:47.045285] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:40.790 [2024-12-16 22:15:47.045291] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.790 [2024-12-16 22:15:47.045297] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.790 [2024-12-16 22:15:47.045302] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.790 [2024-12-16 22:15:47.045308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.790 [2024-12-16 22:15:47.045316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.790 [2024-12-16 22:15:47.045320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.790 [2024-12-16 22:15:47.045325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:40.790 [2024-12-16 22:15:47.045331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.790 [2024-12-16 22:15:47.045336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.790 [2024-12-16 22:15:47.045343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:19:40.790 [2024-12-16 22:15:47.045348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.046568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.790 [2024-12-16 22:15:47.046585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.790 [2024-12-16 22:15:47.046592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:19:40.790 [2024-12-16 22:15:47.046598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.046664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.790 [2024-12-16 22:15:47.046670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.790 [2024-12-16 22:15:47.046676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:40.790 [2024-12-16 22:15:47.046681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.050961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.051044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.790 [2024-12-16 22:15:47.051084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.051100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.051151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.051193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:19:40.790 [2024-12-16 22:15:47.051211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.051225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.051283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.051302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.790 [2024-12-16 22:15:47.051341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.051357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.051383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.051416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.790 [2024-12-16 22:15:47.051433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.051446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.058727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.058884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.790 [2024-12-16 22:15:47.058934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.058976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.064869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.064980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.790 [2024-12-16 22:15:47.065020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.065037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.065066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.065082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.790 [2024-12-16 22:15:47.065125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.065142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.065173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.065195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.790 [2024-12-16 22:15:47.065209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.065284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.065352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.790 [2024-12-16 22:15:47.065371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.790 [2024-12-16 22:15:47.065387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.790 [2024-12-16 22:15:47.065401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.790 [2024-12-16 22:15:47.065432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:40.790 [2024-12-16 22:15:47.065449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:40.790 [2024-12-16 22:15:47.065467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:40.790 [2024-12-16 22:15:47.065523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:40.790 [2024-12-16 22:15:47.065569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:40.790 [2024-12-16 22:15:47.065586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:40.790 [2024-12-16 22:15:47.065604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:40.790 [2024-12-16 22:15:47.065621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:40.790 [2024-12-16 22:15:47.065660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:40.790 [2024-12-16 22:15:47.065680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:40.790 [2024-12-16 22:15:47.065735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:40.790 [2024-12-16 22:15:47.065753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:40.790 [2024-12-16 22:15:47.065883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.976 ms, result 0
00:19:41.362
00:19:41.362
00:19:41.362 22:15:47 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89629
00:19:41.362 22:15:47 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:19:41.362 22:15:47 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89629
00:19:41.362 22:15:47 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89629 ']'
00:19:41.362 22:15:47 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:41.362 22:15:47 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:41.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:41.362 22:15:47 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:41.362 22:15:47 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:41.362 22:15:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:41.362 [2024-12-16 22:15:47.617743] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:19:41.362 [2024-12-16 22:15:47.617899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89629 ]
00:19:41.623 [2024-12-16 22:15:47.774038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:41.623 [2024-12-16 22:15:47.791665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:42.195 22:15:48 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:42.195 22:15:48 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:19:42.195 22:15:48 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:19:42.457 [2024-12-16 22:15:48.645553] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:42.457 [2024-12-16 22:15:48.645603] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:42.457 [2024-12-16 22:15:48.788538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.457 [2024-12-16 22:15:48.788585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:42.457 [2024-12-16 22:15:48.788598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:42.457 [2024-12-16 22:15:48.788611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.457 [2024-12-16 22:15:48.790876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.457 [2024-12-16 22:15:48.790915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:42.457 [2024-12-16 22:15:48.790925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms
00:19:42.457 [2024-12-16 22:15:48.790934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.457 [2024-12-16 22:15:48.791015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:42.457 [2024-12-16 22:15:48.791256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:42.457 [2024-12-16 22:15:48.791269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.457 [2024-12-16 22:15:48.791279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:42.457 [2024-12-16 22:15:48.791287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms
00:19:42.457 [2024-12-16 22:15:48.791296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.457 [2024-12-16 22:15:48.793579] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:42.457 [2024-12-16 22:15:48.798094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.457 [2024-12-16 22:15:48.798435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:42.457 [2024-12-16 22:15:48.798493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.510 ms
00:19:42.457 [2024-12-16 22:15:48.798518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.457 [2024-12-16 22:15:48.798665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.457 [2024-12-16 22:15:48.798696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:42.457 [2024-12-16 22:15:48.798731]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:42.457 [2024-12-16 22:15:48.798764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.719 [2024-12-16 22:15:48.806441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.719 [2024-12-16 22:15:48.806470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.719 [2024-12-16 22:15:48.806480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.490 ms 00:19:42.719 [2024-12-16 22:15:48.806488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.719 [2024-12-16 22:15:48.806589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.719 [2024-12-16 22:15:48.806603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.719 [2024-12-16 22:15:48.806613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:42.719 [2024-12-16 22:15:48.806622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.719 [2024-12-16 22:15:48.806649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.719 [2024-12-16 22:15:48.806658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:42.719 [2024-12-16 22:15:48.806669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:42.719 [2024-12-16 22:15:48.806679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.719 [2024-12-16 22:15:48.806703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:42.719 [2024-12-16 22:15:48.808110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.719 [2024-12-16 22:15:48.808140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.719 [2024-12-16 22:15:48.808151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:19:42.719 [2024-12-16 22:15:48.808160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.719 [2024-12-16 22:15:48.808196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.719 [2024-12-16 22:15:48.808206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:42.719 [2024-12-16 22:15:48.808214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:42.719 [2024-12-16 22:15:48.808222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.719 [2024-12-16 22:15:48.808241] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:42.719 [2024-12-16 22:15:48.808261] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:42.719 [2024-12-16 22:15:48.808299] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:42.719 [2024-12-16 22:15:48.808318] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:42.719 [2024-12-16 22:15:48.808420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:42.719 [2024-12-16 22:15:48.808431] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:42.719 [2024-12-16 22:15:48.808446] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:42.719 [2024-12-16 22:15:48.808456] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:42.719 [2024-12-16 22:15:48.808465] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:42.719 [2024-12-16 22:15:48.808476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:42.719 [2024-12-16 22:15:48.808483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:42.719 [2024-12-16 22:15:48.808492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:42.719 [2024-12-16 22:15:48.808501] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:42.719 [2024-12-16 22:15:48.808510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.719 [2024-12-16 22:15:48.808517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:42.719 [2024-12-16 22:15:48.808526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:19:42.719 [2024-12-16 22:15:48.808532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.719 [2024-12-16 22:15:48.808620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.719 [2024-12-16 22:15:48.808627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:42.720 [2024-12-16 22:15:48.808636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:42.720 [2024-12-16 22:15:48.808643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.720 [2024-12-16 22:15:48.808746] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:42.720 [2024-12-16 22:15:48.808756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:42.720 [2024-12-16 22:15:48.808767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:42.720 [2024-12-16 22:15:48.808776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.720 [2024-12-16 22:15:48.808789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:42.720 [2024-12-16 22:15:48.808801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:42.720 [2024-12-16 22:15:48.808811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:42.720 [2024-12-16 22:15:48.808821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:42.720 [2024-12-16 22:15:48.808830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:42.720 [2024-12-16 22:15:48.808853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:42.720 [2024-12-16 22:15:48.808863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:42.720 [2024-12-16 22:15:48.808871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:42.720 [2024-12-16 22:15:48.808880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:42.720 [2024-12-16 22:15:48.808888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:42.720 [2024-12-16 22:15:48.808898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:42.720 [2024-12-16 22:15:48.808908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.720 
[2024-12-16 22:15:48.808942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:42.720 [2024-12-16 22:15:48.808951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:42.720 [2024-12-16 22:15:48.808960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.720 [2024-12-16 22:15:48.808969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:42.720 [2024-12-16 22:15:48.808980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:42.720 [2024-12-16 22:15:48.808988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.720 [2024-12-16 22:15:48.808997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:42.720 [2024-12-16 22:15:48.809005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:42.720 [2024-12-16 22:15:48.809014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.720 [2024-12-16 22:15:48.809021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:42.720 [2024-12-16 22:15:48.809031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:42.720 [2024-12-16 22:15:48.809038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.720 [2024-12-16 22:15:48.809047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:42.720 [2024-12-16 22:15:48.809061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:42.720 [2024-12-16 22:15:48.809072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:42.720 [2024-12-16 22:15:48.809079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:42.720 [2024-12-16 22:15:48.809088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:42.720 [2024-12-16 22:15:48.809096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:42.720 [2024-12-16 22:15:48.809105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:42.720 [2024-12-16 22:15:48.809112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:42.720 [2024-12-16 22:15:48.809123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:42.720 [2024-12-16 22:15:48.809131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:42.720 [2024-12-16 22:15:48.809140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:42.720 [2024-12-16 22:15:48.809148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.720 [2024-12-16 22:15:48.809158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:42.720 [2024-12-16 22:15:48.809166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:42.720 [2024-12-16 22:15:48.809174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.720 [2024-12-16 22:15:48.809182] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:42.720 [2024-12-16 22:15:48.809195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:42.720 [2024-12-16 22:15:48.809203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:42.720 [2024-12-16 22:15:48.809213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.720 [2024-12-16 22:15:48.809224] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:42.720 [2024-12-16 22:15:48.809233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:42.720 [2024-12-16 22:15:48.809239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:42.720 [2024-12-16 22:15:48.809248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:42.720 [2024-12-16 22:15:48.809254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:42.720 [2024-12-16 22:15:48.809265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:42.720 [2024-12-16 22:15:48.809273] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:42.720 [2024-12-16 22:15:48.809284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:42.720 [2024-12-16 22:15:48.809292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:42.720 [2024-12-16 22:15:48.809301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:42.720 [2024-12-16 22:15:48.809308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:42.720 [2024-12-16 22:15:48.809316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:42.720 [2024-12-16 22:15:48.809323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:42.720 [2024-12-16 22:15:48.809331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:42.720 [2024-12-16 22:15:48.809338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:42.720 [2024-12-16 22:15:48.809346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:42.720 [2024-12-16 22:15:48.809353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:42.720 [2024-12-16 22:15:48.809362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:42.720 [2024-12-16 22:15:48.809368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:42.720 [2024-12-16 22:15:48.809377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:42.720 [2024-12-16 22:15:48.809384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:42.720 [2024-12-16 22:15:48.809400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:42.720 [2024-12-16 22:15:48.809407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:42.720 [2024-12-16 
22:15:48.809418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:42.720 [2024-12-16 22:15:48.809431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:42.720 [2024-12-16 22:15:48.809440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:42.720 [2024-12-16 22:15:48.809448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:42.720 [2024-12-16 22:15:48.809457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:42.720 [2024-12-16 22:15:48.809464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.720 [2024-12-16 22:15:48.809474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:42.720 [2024-12-16 22:15:48.809482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:19:42.720 [2024-12-16 22:15:48.809491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.720 [2024-12-16 22:15:48.818740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.720 [2024-12-16 22:15:48.818774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.720 [2024-12-16 22:15:48.818785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.190 ms 00:19:42.720 [2024-12-16 22:15:48.818795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.720 [2024-12-16 22:15:48.818928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.720 [2024-12-16 22:15:48.818943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:42.720 [2024-12-16 22:15:48.818951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:42.720 [2024-12-16 22:15:48.818960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.720 [2024-12-16 22:15:48.827935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.720 [2024-12-16 22:15:48.828335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.720 [2024-12-16 22:15:48.828359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.957 ms 00:19:42.720 [2024-12-16 22:15:48.828382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.720 [2024-12-16 22:15:48.828441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.720 [2024-12-16 22:15:48.828454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.720 [2024-12-16 22:15:48.828463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:42.720 [2024-12-16 22:15:48.828473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.720 [2024-12-16 22:15:48.828828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.720 [2024-12-16 22:15:48.828886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.720 [2024-12-16 22:15:48.828900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:19:42.720 [2024-12-16 22:15:48.828916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:42.721 [2024-12-16 22:15:48.829068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.829084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.721 [2024-12-16 22:15:48.829093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:42.721 [2024-12-16 22:15:48.829104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.834743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.834773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.721 [2024-12-16 22:15:48.834782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.619 ms 00:19:42.721 [2024-12-16 22:15:48.834791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.845005] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:42.721 [2024-12-16 22:15:48.845039] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:42.721 [2024-12-16 22:15:48.845052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.845062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:42.721 [2024-12-16 22:15:48.845071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.154 ms 00:19:42.721 [2024-12-16 22:15:48.845081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.860514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.860732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:42.721 [2024-12-16 22:15:48.860916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.386 ms 00:19:42.721 [2024-12-16 22:15:48.861057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.864282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.864830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:42.721 [2024-12-16 22:15:48.865255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.092 ms 00:19:42.721 [2024-12-16 22:15:48.865343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.869033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.869467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:42.721 [2024-12-16 22:15:48.869760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:19:42.721 [2024-12-16 22:15:48.870156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.871366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.871585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:42.721 [2024-12-16 22:15:48.871888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:19:42.721 [2024-12-16 22:15:48.871964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.892247] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.892386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:42.721 [2024-12-16 22:15:48.892441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.247 ms 00:19:42.721 [2024-12-16 22:15:48.892456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.900097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:42.721 [2024-12-16 22:15:48.915666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.915706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:42.721 [2024-12-16 22:15:48.915721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.135 ms 00:19:42.721 [2024-12-16 22:15:48.915729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.915809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.915822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:42.721 [2024-12-16 22:15:48.915833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:42.721 [2024-12-16 22:15:48.915857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.915913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.915922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:42.721 [2024-12-16 22:15:48.915932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:42.721 [2024-12-16 22:15:48.915940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.915965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.915974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:42.721 [2024-12-16 22:15:48.915990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:42.721 [2024-12-16 22:15:48.915997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.916031] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:42.721 [2024-12-16 22:15:48.916040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.916050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:42.721 [2024-12-16 22:15:48.916058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:42.721 [2024-12-16 22:15:48.916067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.920692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.920732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:42.721 [2024-12-16 22:15:48.920743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.604 ms 00:19:42.721 [2024-12-16 22:15:48.920760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.920849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.721 [2024-12-16 22:15:48.920865] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:42.721 [2024-12-16 22:15:48.920874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:42.721 [2024-12-16 22:15:48.920888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.721 [2024-12-16 22:15:48.921765] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:42.721 [2024-12-16 22:15:48.922910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.937 ms, result 0 00:19:42.721 [2024-12-16 22:15:48.925224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:42.721 Some configs were skipped because the RPC state that can call them passed over. 00:19:42.721 22:15:48 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:42.982 [2024-12-16 22:15:49.151326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.982 [2024-12-16 22:15:49.151389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:42.982 [2024-12-16 22:15:49.151413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.151 ms 00:19:42.982 [2024-12-16 22:15:49.151422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.982 [2024-12-16 22:15:49.151459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.294 ms, result 0 00:19:42.982 true 00:19:42.982 22:15:49 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:43.245 [2024-12-16 22:15:49.383069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.245 [2024-12-16 22:15:49.383138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:43.245 [2024-12-16 22:15:49.383152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:19:43.245 [2024-12-16 22:15:49.383163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.245 [2024-12-16 22:15:49.383201] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.892 ms, result 0 00:19:43.245 true 00:19:43.245 22:15:49 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89629 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89629 ']' 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89629 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89629 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.245 killing process with pid 89629 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89629' 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89629 00:19:43.245 22:15:49 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89629 00:19:43.245 [2024-12-16 22:15:49.556209] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.245 [2024-12-16 22:15:49.556268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:43.245 [2024-12-16 22:15:49.556284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.246 [2024-12-16 22:15:49.556297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.556338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:43.246 [2024-12-16 22:15:49.556989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.557016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:43.246 [2024-12-16 22:15:49.557029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:19:43.246 [2024-12-16 22:15:49.557039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.557329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.557359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:43.246 [2024-12-16 22:15:49.557369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:43.246 [2024-12-16 22:15:49.557379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.562015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.562058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:43.246 [2024-12-16 22:15:49.562068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:19:43.246 [2024-12-16 22:15:49.562084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.569004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.569043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:43.246 [2024-12-16 22:15:49.569053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:19:43.246 [2024-12-16 22:15:49.569064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.571635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.571682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:43.246 [2024-12-16 22:15:49.571692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:19:43.246 [2024-12-16 22:15:49.571701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.576707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.576754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:43.246 [2024-12-16 22:15:49.576766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.966 ms 00:19:43.246 [2024-12-16 22:15:49.576777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.576924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.576937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:43.246 [2024-12-16 22:15:49.576945] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:43.246 [2024-12-16 22:15:49.576955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.580049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.580092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:43.246 [2024-12-16 22:15:49.580102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:19:43.246 [2024-12-16 22:15:49.580114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.582647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.582693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:43.246 [2024-12-16 22:15:49.582702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.493 ms 00:19:43.246 [2024-12-16 22:15:49.582712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.584877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.584917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:43.246 [2024-12-16 22:15:49.584925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:19:43.246 [2024-12-16 22:15:49.584935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.586751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.246 [2024-12-16 22:15:49.586798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:43.246 [2024-12-16 22:15:49.586807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms 00:19:43.246 [2024-12-16 22:15:49.586816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.246 [2024-12-16 22:15:49.586870] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:43.246 [2024-12-16 22:15:49.586888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586982] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.586999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 
22:15:49.587196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:43.246 [2024-12-16 22:15:49.587339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:19:43.247 [2024-12-16 22:15:49.587404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:43.247 [2024-12-16 22:15:49.587765] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:43.247 [2024-12-16 22:15:49.587777] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:19:43.247 [2024-12-16 22:15:49.587798] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:43.247 [2024-12-16 22:15:49.587805] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:43.247 [2024-12-16 22:15:49.587814] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:43.247 [2024-12-16 22:15:49.587823] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:43.247 [2024-12-16 22:15:49.587847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:43.247 [2024-12-16 22:15:49.587860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:43.247 [2024-12-16 22:15:49.587869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:43.247 [2024-12-16 22:15:49.587876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:43.247 [2024-12-16 22:15:49.587884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:43.247 [2024-12-16 22:15:49.587891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.247 
[2024-12-16 22:15:49.587904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:43.247 [2024-12-16 22:15:49.587913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:19:43.247 [2024-12-16 22:15:49.587924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.247 [2024-12-16 22:15:49.589895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.247 [2024-12-16 22:15:49.589921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:43.247 [2024-12-16 22:15:49.589932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.952 ms 00:19:43.247 [2024-12-16 22:15:49.589943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.247 [2024-12-16 22:15:49.590076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.247 [2024-12-16 22:15:49.590089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:43.247 [2024-12-16 22:15:49.590099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:43.247 [2024-12-16 22:15:49.590110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.596991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.597039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.508 [2024-12-16 22:15:49.597049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.597059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.597133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.597145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.508 [2024-12-16 22:15:49.597153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.597165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.597211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.597222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.508 [2024-12-16 22:15:49.597230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.597240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.597258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.597268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.508 [2024-12-16 22:15:49.597276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.597285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.610074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.610127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.508 [2024-12-16 22:15:49.610138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.610151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619283] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.619335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.508 [2024-12-16 22:15:49.619346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.619359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.619418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.508 [2024-12-16 22:15:49.619427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.619437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.619481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.508 [2024-12-16 22:15:49.619490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.619500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.619583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.508 [2024-12-16 22:15:49.619591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.619601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.619645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:43.508 [2024-12-16 22:15:49.619657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.619669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.619720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.508 [2024-12-16 22:15:49.619729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.619739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.508 [2024-12-16 22:15:49.619799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.508 [2024-12-16 22:15:49.619807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.508 [2024-12-16 22:15:49.619816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.508 [2024-12-16 22:15:49.619984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.748 ms, result 0 00:19:43.508 22:15:49 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:43.508 22:15:49 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.770 [2024-12-16 22:15:49.905126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:43.770 [2024-12-16 22:15:49.905281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89665 ] 00:19:43.770 [2024-12-16 22:15:50.065232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.770 [2024-12-16 22:15:50.095200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.031 [2024-12-16 22:15:50.212545] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.031 [2024-12-16 22:15:50.212642] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.031 [2024-12-16 22:15:50.372873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.031 [2024-12-16 22:15:50.372935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:44.031 [2024-12-16 22:15:50.372950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.031 [2024-12-16 22:15:50.372959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.031 [2024-12-16 22:15:50.375612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.031 [2024-12-16 22:15:50.375667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.031 [2024-12-16 22:15:50.375678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:19:44.031 [2024-12-16 22:15:50.375686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.031 [2024-12-16 22:15:50.375795] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:44.031 [2024-12-16 22:15:50.376199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:44.031 [2024-12-16 22:15:50.376245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.031 [2024-12-16 22:15:50.376253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.031 [2024-12-16 22:15:50.376263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:19:44.031 [2024-12-16 22:15:50.376270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.031 [2024-12-16 22:15:50.378032] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:44.293 [2024-12-16 22:15:50.381684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.293 [2024-12-16 22:15:50.381759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:44.293 [2024-12-16 22:15:50.381778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:19:44.293 [2024-12-16 22:15:50.381786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.293 [2024-12-16 22:15:50.381900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.293 [2024-12-16 22:15:50.381913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:44.293 [2024-12-16 22:15:50.381922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:19:44.293 [2024-12-16 22:15:50.381930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.293 [2024-12-16 22:15:50.389933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.293 [2024-12-16 22:15:50.389976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.293 [2024-12-16 22:15:50.389986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.953 ms 00:19:44.293 [2024-12-16 22:15:50.389994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.293 [2024-12-16 22:15:50.390134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.293 [2024-12-16 22:15:50.390146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.293 [2024-12-16 22:15:50.390155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:44.293 [2024-12-16 22:15:50.390170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.293 [2024-12-16 22:15:50.390201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.293 [2024-12-16 22:15:50.390211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:44.293 [2024-12-16 22:15:50.390219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:44.293 [2024-12-16 22:15:50.390226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.293 [2024-12-16 22:15:50.390249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:44.293 [2024-12-16 22:15:50.392271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.293 [2024-12-16 22:15:50.392312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.293 [2024-12-16 22:15:50.392323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:19:44.293 [2024-12-16 22:15:50.392336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.293 [2024-12-16 22:15:50.392388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.293 [2024-12-16 22:15:50.392400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:44.293 [2024-12-16 22:15:50.392411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:44.293 [2024-12-16 22:15:50.392423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.293 [2024-12-16 22:15:50.392441] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:44.293 [2024-12-16 22:15:50.392464] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:44.293 [2024-12-16 22:15:50.392499] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:44.293 [2024-12-16 22:15:50.392517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:44.293 [2024-12-16 22:15:50.392626] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:44.293 [2024-12-16 22:15:50.392636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:44.293 [2024-12-16 22:15:50.392647] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:44.293 [2024-12-16 22:15:50.392658] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:44.293 [2024-12-16 22:15:50.392667] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:44.293 [2024-12-16 22:15:50.392676] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:44.293 [2024-12-16 22:15:50.392683] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:44.294 [2024-12-16 22:15:50.392691] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:44.294 [2024-12-16 22:15:50.392702] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:44.294 [2024-12-16 22:15:50.392713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.294 [2024-12-16 22:15:50.392721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:44.294 [2024-12-16 22:15:50.392729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:19:44.294 [2024-12-16 22:15:50.392736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.294 [2024-12-16 22:15:50.392827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.294 [2024-12-16 22:15:50.392858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:44.294 [2024-12-16 22:15:50.392867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:44.294 [2024-12-16 22:15:50.392874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.294 [2024-12-16 22:15:50.392973] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:44.294 [2024-12-16 22:15:50.392996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:44.294 [2024-12-16 22:15:50.393005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:44.294 [2024-12-16 22:15:50.393035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:44.294 [2024-12-16 22:15:50.393063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.294 [2024-12-16 22:15:50.393078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:44.294 [2024-12-16 22:15:50.393087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:44.294 [2024-12-16 22:15:50.393095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.294 [2024-12-16 22:15:50.393103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:44.294 [2024-12-16 22:15:50.393111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:44.294 [2024-12-16 22:15:50.393124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393132] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:44.294 [2024-12-16 22:15:50.393140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:44.294 [2024-12-16 22:15:50.393164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:44.294 [2024-12-16 22:15:50.393192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:44.294 [2024-12-16 22:15:50.393216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:44.294 [2024-12-16 22:15:50.393238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:44.294 [2024-12-16 22:15:50.393260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.294 [2024-12-16 22:15:50.393275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:44.294 [2024-12-16 22:15:50.393282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:44.294 [2024-12-16 22:15:50.393289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.294 [2024-12-16 22:15:50.393297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:44.294 [2024-12-16 22:15:50.393305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:44.294 [2024-12-16 22:15:50.393314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:44.294 [2024-12-16 22:15:50.393329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:44.294 [2024-12-16 22:15:50.393336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393344] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:44.294 [2024-12-16 22:15:50.393353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:44.294 [2024-12-16 22:15:50.393363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.294 [2024-12-16 22:15:50.393383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:44.294 
[2024-12-16 22:15:50.393391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:44.294 [2024-12-16 22:15:50.393399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:44.294 [2024-12-16 22:15:50.393407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:44.294 [2024-12-16 22:15:50.393415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:44.294 [2024-12-16 22:15:50.393422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:44.294 [2024-12-16 22:15:50.393432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:44.294 [2024-12-16 22:15:50.393442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.294 [2024-12-16 22:15:50.393454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:44.294 [2024-12-16 22:15:50.393463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:44.294 [2024-12-16 22:15:50.393471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:44.294 [2024-12-16 22:15:50.393478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:44.294 [2024-12-16 22:15:50.393485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:44.294 [2024-12-16 22:15:50.393492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:44.294 [2024-12-16 22:15:50.393499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:44.294 [2024-12-16 22:15:50.393506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:44.294 [2024-12-16 22:15:50.393514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:44.294 [2024-12-16 22:15:50.393521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:44.294 [2024-12-16 22:15:50.393528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:44.294 [2024-12-16 22:15:50.393534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:44.294 [2024-12-16 22:15:50.393541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:44.294 [2024-12-16 22:15:50.393549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:44.294 [2024-12-16 22:15:50.393556] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:44.294 [2024-12-16 22:15:50.393568] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.294 [2024-12-16 22:15:50.393579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:44.294 [2024-12-16 22:15:50.393586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:44.294 [2024-12-16 22:15:50.393593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:44.294 [2024-12-16 22:15:50.393601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:44.294 [2024-12-16 22:15:50.393609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.294 [2024-12-16 22:15:50.393618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:44.294 [2024-12-16 22:15:50.393626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:19:44.294 [2024-12-16 22:15:50.393634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.294 [2024-12-16 22:15:50.407445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.294 [2024-12-16 22:15:50.407493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.294 [2024-12-16 22:15:50.407506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.758 ms 00:19:44.294 [2024-12-16 22:15:50.407515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.294 [2024-12-16 22:15:50.407651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.294 [2024-12-16 22:15:50.407670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:44.294 [2024-12-16 22:15:50.407680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:44.294 [2024-12-16 22:15:50.407690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.294 [2024-12-16 22:15:50.428455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.294 [2024-12-16 22:15:50.428517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.294 [2024-12-16 22:15:50.428530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.739 ms 00:19:44.294 [2024-12-16 22:15:50.428539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.294 [2024-12-16 22:15:50.428643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.428656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.295 [2024-12-16 22:15:50.428665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.295 [2024-12-16 22:15:50.428674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.429253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.429293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.295 [2024-12-16 22:15:50.429305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:19:44.295 [2024-12-16 22:15:50.429322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 
22:15:50.429481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.429500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.295 [2024-12-16 22:15:50.429510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:44.295 [2024-12-16 22:15:50.429519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.438104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.438162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.295 [2024-12-16 22:15:50.438172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.559 ms 00:19:44.295 [2024-12-16 22:15:50.438183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.442065] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:44.295 [2024-12-16 22:15:50.442118] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:44.295 [2024-12-16 22:15:50.442131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.442139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:44.295 [2024-12-16 22:15:50.442148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.838 ms 00:19:44.295 [2024-12-16 22:15:50.442155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.458086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.458139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:44.295 [2024-12-16 22:15:50.458150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.864 ms 00:19:44.295 [2024-12-16 22:15:50.458159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.461005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.461046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:44.295 [2024-12-16 22:15:50.461057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:19:44.295 [2024-12-16 22:15:50.461064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.463588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.463645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:44.295 [2024-12-16 22:15:50.463655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.472 ms 00:19:44.295 [2024-12-16 22:15:50.463663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.464030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.464059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:44.295 [2024-12-16 22:15:50.464069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:44.295 [2024-12-16 22:15:50.464076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.488316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.488375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:44.295 [2024-12-16 22:15:50.488388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.215 ms 00:19:44.295 [2024-12-16 22:15:50.488396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.496730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:44.295 [2024-12-16 22:15:50.516984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.517036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:44.295 [2024-12-16 22:15:50.517049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.491 ms 00:19:44.295 [2024-12-16 22:15:50.517066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.517159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.517170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:44.295 [2024-12-16 22:15:50.517183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:44.295 [2024-12-16 22:15:50.517191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.517248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.517258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:44.295 [2024-12-16 22:15:50.517267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:44.295 [2024-12-16 22:15:50.517275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.517303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.517313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:44.295 [2024-12-16 22:15:50.517325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:44.295 [2024-12-16 22:15:50.517335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.517373] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:44.295 [2024-12-16 22:15:50.517383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.517390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:44.295 [2024-12-16 22:15:50.517403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:44.295 [2024-12-16 22:15:50.517417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.523637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.523689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:44.295 [2024-12-16 22:15:50.523709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.200 ms 00:19:44.295 [2024-12-16 22:15:50.523721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.523812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.295 [2024-12-16 22:15:50.523824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:44.295 [2024-12-16 22:15:50.523833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:44.295 [2024-12-16 22:15:50.523860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.295 [2024-12-16 22:15:50.525303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.295 [2024-12-16 22:15:50.526731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.129 ms, result 0 00:19:44.295 [2024-12-16 22:15:50.528027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:44.295 [2024-12-16 22:15:50.535378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:45.261  [2024-12-16T22:15:52.586Z] Copying: 19/256 [MB] (19 MBps) [2024-12-16T22:15:53.973Z] Copying: 33/256 [MB] (14 MBps) [2024-12-16T22:15:54.547Z] Copying: 51/256 [MB] (17 MBps) [2024-12-16T22:15:55.934Z] Copying: 61/256 [MB] (10 MBps) [2024-12-16T22:15:56.878Z] Copying: 72/256 [MB] (10 MBps) [2024-12-16T22:15:57.820Z] Copying: 88/256 [MB] (16 MBps) [2024-12-16T22:15:58.764Z] Copying: 102/256 [MB] (14 MBps) [2024-12-16T22:15:59.708Z] Copying: 124/256 [MB] (21 MBps) [2024-12-16T22:16:00.651Z] Copying: 139/256 [MB] (14 MBps) [2024-12-16T22:16:01.595Z] Copying: 157/256 [MB] (18 MBps) [2024-12-16T22:16:02.537Z] Copying: 173/256 [MB] (16 MBps) [2024-12-16T22:16:03.926Z] Copying: 193/256 [MB] (19 MBps) [2024-12-16T22:16:04.869Z] Copying: 203/256 [MB] (10 MBps) [2024-12-16T22:16:05.812Z] Copying: 216/256 [MB] (13 MBps) [2024-12-16T22:16:06.758Z] Copying: 237/256 [MB] (20 MBps) [2024-12-16T22:16:06.758Z] Copying: 254/256 [MB] (17 MBps) [2024-12-16T22:16:06.758Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-16 22:16:06.611569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.411 [2024-12-16 22:16:06.613512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.613564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:00.411 [2024-12-16 22:16:06.613579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.411 [2024-12-16 22:16:06.613588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.613610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:00.411 [2024-12-16 22:16:06.614354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.614397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:00.411 [2024-12-16 22:16:06.614409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:20:00.411 [2024-12-16 22:16:06.614419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.614689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.614701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:00.411 [2024-12-16 22:16:06.614714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:20:00.411 [2024-12-16 22:16:06.614724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 
22:16:06.618438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.618463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:00.411 [2024-12-16 22:16:06.618473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:20:00.411 [2024-12-16 22:16:06.618481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.625346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.625406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:00.411 [2024-12-16 22:16:06.625417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.847 ms 00:20:00.411 [2024-12-16 22:16:06.625428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.628425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.628476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:00.411 [2024-12-16 22:16:06.628486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.936 ms 00:20:00.411 [2024-12-16 22:16:06.628493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.633626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.633685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:00.411 [2024-12-16 22:16:06.633697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.086 ms 00:20:00.411 [2024-12-16 22:16:06.633705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.633886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.633899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:00.411 [2024-12-16 22:16:06.633912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:00.411 [2024-12-16 22:16:06.633928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.636422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.636469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:00.411 [2024-12-16 22:16:06.636479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.475 ms 00:20:00.411 [2024-12-16 22:16:06.636486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.638568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.638613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:00.411 [2024-12-16 22:16:06.638622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:20:00.411 [2024-12-16 22:16:06.638629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.640303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.640353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:00.411 [2024-12-16 22:16:06.640363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:20:00.411 [2024-12-16 22:16:06.640370] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.642052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.411 [2024-12-16 22:16:06.642097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:00.411 [2024-12-16 22:16:06.642107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:20:00.411 [2024-12-16 22:16:06.642113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.411 [2024-12-16 22:16:06.642154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:00.411 [2024-12-16 22:16:06.642168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:00.411 [2024-12-16 22:16:06.642486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642508] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642696] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 
22:16:06.642912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:00.412 [2024-12-16 22:16:06.642968] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:00.412 [2024-12-16 22:16:06.642977] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:20:00.412 [2024-12-16 22:16:06.642985] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:00.412 [2024-12-16 22:16:06.642998] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:00.412 [2024-12-16 22:16:06.643013] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:00.412 [2024-12-16 22:16:06.643021] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:00.412 [2024-12-16 22:16:06.643029] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:00.412 [2024-12-16 22:16:06.643041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:00.412 [2024-12-16 22:16:06.643049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:00.412 [2024-12-16 22:16:06.643056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:00.412 [2024-12-16 22:16:06.643063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:00.412 [2024-12-16 22:16:06.643070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.412 [2024-12-16 22:16:06.643078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:00.412 [2024-12-16 22:16:06.643088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:20:00.412 [2024-12-16 22:16:06.643095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.412 [2024-12-16 22:16:06.645360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.412 [2024-12-16 22:16:06.645399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:00.412 [2024-12-16 22:16:06.645409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:20:00.412 [2024-12-16 22:16:06.645423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.412 [2024-12-16 22:16:06.645535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.412 [2024-12-16 22:16:06.645549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:00.412 [2024-12-16 22:16:06.645558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:00.412 [2024-12-16 22:16:06.645565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.412 [2024-12-16 22:16:06.653319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.412 [2024-12-16 22:16:06.653372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.412 
[2024-12-16 22:16:06.653383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.412 [2024-12-16 22:16:06.653395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.412 [2024-12-16 22:16:06.653464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.412 [2024-12-16 22:16:06.653476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.412 [2024-12-16 22:16:06.653485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.412 [2024-12-16 22:16:06.653492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.412 [2024-12-16 22:16:06.653549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.412 [2024-12-16 22:16:06.653559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.412 [2024-12-16 22:16:06.653567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.412 [2024-12-16 22:16:06.653575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.412 [2024-12-16 22:16:06.653599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.412 [2024-12-16 22:16:06.653610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.412 [2024-12-16 22:16:06.653618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.412 [2024-12-16 22:16:06.653629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.412 [2024-12-16 22:16:06.666764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.412 [2024-12-16 22:16:06.666815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.413 [2024-12-16 22:16:06.666830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.666856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.676712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.413 [2024-12-16 22:16:06.676761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.413 [2024-12-16 22:16:06.676772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.676780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.676904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.413 [2024-12-16 22:16:06.676915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.413 [2024-12-16 22:16:06.676925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.676933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.676983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.413 [2024-12-16 22:16:06.676997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.413 [2024-12-16 22:16:06.677005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.677013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.677085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.413 [2024-12-16 22:16:06.677094] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.413 [2024-12-16 22:16:06.677103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.677115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.677147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.413 [2024-12-16 22:16:06.677159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:00.413 [2024-12-16 22:16:06.677167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.677175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.677217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.413 [2024-12-16 22:16:06.677227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.413 [2024-12-16 22:16:06.677236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.677244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.677295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.413 [2024-12-16 22:16:06.677314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.413 [2024-12-16 22:16:06.677324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.413 [2024-12-16 22:16:06.677337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.413 [2024-12-16 22:16:06.677490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.966 ms, result 0 00:20:00.674 00:20:00.674 00:20:00.674 22:16:06 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:00.674 22:16:06 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:01.246 22:16:07 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.246 [2024-12-16 22:16:07.521450] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
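A note on the SB metadata layout tables dumped above: the regions are meant to tile the device exactly, so each region's blk_offs plus blk_sz must equal the next region's blk_offs, and the grand total must match the reported capacity. The sketch below (Python; not part of the test suite) re-checks the nvc table under the assumption that one FTL block is 4 KiB; the block size is never printed in this log, but it is the only value consistent with the 5171.00 MiB NV cache capacity and the 90.00 MiB l2p region reported above.

```python
# Sketch: cross-check the "SB metadata layout - nvc" table logged above.
# Assumes 4 KiB FTL blocks (consistent with this log's capacities; the
# block size itself is never printed).
BLOCK_SZ = 4096
MiB = 1024 * 1024

# (type, ver, blk_offs, blk_sz) transcribed from the dump above.
NVC_REGIONS = [
    (0x0,        5, 0x0,    0x20),
    (0x2,        0, 0x20,   0x5a00),    # l2p: 90.00 MiB
    (0x3,        2, 0x5a20, 0x80),
    (0x4,        2, 0x5aa0, 0x80),
    (0xa,        2, 0x5b20, 0x800),
    (0xb,        2, 0x6320, 0x800),
    (0xc,        2, 0x6b20, 0x800),
    (0xd,        2, 0x7320, 0x800),
    (0xe,        0, 0x7b20, 0x40),
    (0xf,        0, 0x7b60, 0x40),
    (0x10,       1, 0x7ba0, 0x20),
    (0x11,       1, 0x7bc0, 0x20),
    (0x6,        2, 0x7be0, 0x20),
    (0x7,        2, 0x7c00, 0x20),
    (0xfffffffe, 0, 0x7c20, 0x13b6e0),  # unallocated remainder
]

# Adjacent regions must tile the device: no gaps, no overlaps.
for (_, _, offs, sz), (_, _, nxt, _) in zip(NVC_REGIONS, NVC_REGIONS[1:]):
    assert offs + sz == nxt, f"gap/overlap at blk_offs {offs:#x}"

last = NVC_REGIONS[-1]
total_blocks = last[2] + last[3]
print(f"NV cache: {total_blocks * BLOCK_SZ / MiB:.2f} MiB")  # 5171.00, as logged

# l2p (type 0x2) holds one address per L2P entry:
# 23592960 entries x 4 B ("L2P address size") == 0x5a00 blocks of 4 KiB.
assert 23592960 * 4 == 0x5a00 * BLOCK_SZ  # 90.00 MiB, matching the dump
```

The base-dev table tiles the same way: region type 0x9 (blk_offs 0x40, blk_sz 0x1900000) is exactly the 102400.00 MiB data_btm area, and the "Bands validity" dump printed during shutdown lists 100 bands of 261120 user blocks each within it, the balance per band presumably being band metadata. The statistics dump rests on the same arithmetic: WAF is total writes divided by user writes, hence "inf" above with 960 internal writes and 0 user writes.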
00:20:01.246 [2024-12-16 22:16:07.521600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89852 ] 00:20:01.507 [2024-12-16 22:16:07.680425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.507 [2024-12-16 22:16:07.701587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.507 [2024-12-16 22:16:07.802865] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.507 [2024-12-16 22:16:07.802948] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.769 [2024-12-16 22:16:07.963531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.769 [2024-12-16 22:16:07.963589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.769 [2024-12-16 22:16:07.963603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:01.769 [2024-12-16 22:16:07.963612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.769 [2024-12-16 22:16:07.966207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.769 [2024-12-16 22:16:07.966259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.769 [2024-12-16 22:16:07.966271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:20:01.769 [2024-12-16 22:16:07.966279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.769 [2024-12-16 22:16:07.966397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.769 [2024-12-16 22:16:07.966668] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.769 [2024-12-16 22:16:07.966694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.769 [2024-12-16 22:16:07.966705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.769 [2024-12-16 22:16:07.966715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:20:01.769 [2024-12-16 22:16:07.966726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.769 [2024-12-16 22:16:07.968793] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:01.769 [2024-12-16 22:16:07.972251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.769 [2024-12-16 22:16:07.972309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:01.769 [2024-12-16 22:16:07.972327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.461 ms 00:20:01.769 [2024-12-16 22:16:07.972336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.769 [2024-12-16 22:16:07.972443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.769 [2024-12-16 22:16:07.972455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:01.769 [2024-12-16 22:16:07.972465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:01.769 [2024-12-16 22:16:07.972478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.769 [2024-12-16 22:16:07.980677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
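Every management step in these traces follows the same four-line pattern from mngt/ftl_mngt.c — 427 "Action", 428 "name", 430 "duration", 431 "status" — and each whole process closes with a 459 finish_msg total, so a throwaway parser can reconcile the two. A sketch (the regexes are inferred from the log format shown here; SPDK does not ship such a tool):

```python
# Sketch: pair the 428 "name" and 430 "duration" trace_step lines and
# compare their sum with the 459 finish_msg total. Regexes are inferred
# from the log format above, not taken from any SPDK-provided parser.
import re

NAME_RE     = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] "
                         r"name: (.+?) \d\d:\d\d:\d\d\.\d\d\d")
DURATION_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] "
                         r"duration: ([\d.]+) ms")
TOTAL_RE    = re.compile(r"finish_msg: .*? name '([^']+)', "
                         r"duration = ([\d.]+) ms")

def summarize(log_text: str) -> None:
    steps = list(zip(NAME_RE.findall(log_text),
                     (float(d) for d in DURATION_RE.findall(log_text))))
    for name, dur in steps:
        print(f"{dur:8.3f} ms  {name}")
    for proc, total in TOTAL_RE.findall(log_text):
        in_steps = sum(d for _, d in steps)
        print(f"{proc}: {float(total):.3f} ms reported, "
              f"{in_steps:.3f} ms attributed to {len(steps)} steps")
```

Run over the 'FTL shutdown' trace above, the visible steps sum to roughly 31 ms against the reported 63.966 ms, the balance being time between steps that no single step claims; the 'FTL startup' trace shows the same pattern, with Initialize L2P (28.491 ms) and Restore P2L checkpoints (24.215 ms) the largest visible contributors to its 152.129 ms total.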
00:20:01.769 [2024-12-16 22:16:07.980725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.769 [2024-12-16 22:16:07.980736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.150 ms 00:20:01.769 [2024-12-16 22:16:07.980744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.769 [2024-12-16 22:16:07.980904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.770 [2024-12-16 22:16:07.980917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.770 [2024-12-16 22:16:07.980926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:01.770 [2024-12-16 22:16:07.980937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.770 [2024-12-16 22:16:07.980965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.770 [2024-12-16 22:16:07.980973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.770 [2024-12-16 22:16:07.980982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:01.770 [2024-12-16 22:16:07.980989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.770 [2024-12-16 22:16:07.981012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:01.770 [2024-12-16 22:16:07.983059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.770 [2024-12-16 22:16:07.983096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.770 [2024-12-16 22:16:07.983107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:20:01.770 [2024-12-16 22:16:07.983118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.770 [2024-12-16 22:16:07.983162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.770 [2024-12-16 22:16:07.983174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.770 [2024-12-16 22:16:07.983182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:01.770 [2024-12-16 22:16:07.983195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.770 [2024-12-16 22:16:07.983213] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:01.770 [2024-12-16 22:16:07.983234] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:01.770 [2024-12-16 22:16:07.983270] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:01.770 [2024-12-16 22:16:07.983291] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:01.770 [2024-12-16 22:16:07.983396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:01.770 [2024-12-16 22:16:07.983407] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.770 [2024-12-16 22:16:07.983417] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:01.770 [2024-12-16 22:16:07.983427] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983437] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983445] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:01.770 [2024-12-16 22:16:07.983452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.770 [2024-12-16 22:16:07.983459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:01.770 [2024-12-16 22:16:07.983469] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:01.770 [2024-12-16 22:16:07.983479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.770 [2024-12-16 22:16:07.983488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.770 [2024-12-16 22:16:07.983496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:01.770 [2024-12-16 22:16:07.983503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.770 [2024-12-16 22:16:07.983595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.770 [2024-12-16 22:16:07.983605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.770 [2024-12-16 22:16:07.983613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:01.770 [2024-12-16 22:16:07.983620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.770 [2024-12-16 22:16:07.983719] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.770 [2024-12-16 22:16:07.983739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.770 [2024-12-16 22:16:07.983749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.770 [2024-12-16 22:16:07.983775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.770 [2024-12-16 22:16:07.983802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.770 [2024-12-16 22:16:07.983818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.770 [2024-12-16 22:16:07.983826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:01.770 [2024-12-16 22:16:07.983849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.770 [2024-12-16 22:16:07.983858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.770 [2024-12-16 22:16:07.983866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:01.770 [2024-12-16 22:16:07.983877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.770 [2024-12-16 22:16:07.983893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983901] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.770 [2024-12-16 22:16:07.983917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.770 [2024-12-16 22:16:07.983949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.770 [2024-12-16 22:16:07.983973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:01.770 [2024-12-16 22:16:07.983980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.770 [2024-12-16 22:16:07.983988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.770 [2024-12-16 22:16:07.983996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:01.770 [2024-12-16 22:16:07.984004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.770 [2024-12-16 22:16:07.984011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.770 [2024-12-16 22:16:07.984019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:01.770 [2024-12-16 22:16:07.984027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.770 [2024-12-16 22:16:07.984035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.770 [2024-12-16 22:16:07.984043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:01.770 [2024-12-16 22:16:07.984050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.770 [2024-12-16 22:16:07.984058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:01.770 [2024-12-16 22:16:07.984066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:01.770 [2024-12-16 22:16:07.984076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.770 [2024-12-16 22:16:07.984083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:01.770 [2024-12-16 22:16:07.984091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:01.770 [2024-12-16 22:16:07.984098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.770 [2024-12-16 22:16:07.984105] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.770 [2024-12-16 22:16:07.984115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.770 [2024-12-16 22:16:07.984123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.770 [2024-12-16 22:16:07.984131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.770 [2024-12-16 22:16:07.984141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.770 [2024-12-16 22:16:07.984150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.770 [2024-12-16 22:16:07.984157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.770 
[2024-12-16 22:16:07.984166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.770 [2024-12-16 22:16:07.984173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.770 [2024-12-16 22:16:07.984181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.770 [2024-12-16 22:16:07.984190] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.770 [2024-12-16 22:16:07.984201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.770 [2024-12-16 22:16:07.984211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:01.770 [2024-12-16 22:16:07.984219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:01.770 [2024-12-16 22:16:07.984226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:01.770 [2024-12-16 22:16:07.984233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:01.770 [2024-12-16 22:16:07.984240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:01.770 [2024-12-16 22:16:07.984247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:01.770 [2024-12-16 22:16:07.984254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:01.771 [2024-12-16 22:16:07.984261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:01.771 [2024-12-16 22:16:07.984268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:01.771 [2024-12-16 22:16:07.984275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:01.771 [2024-12-16 22:16:07.984281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:01.771 [2024-12-16 22:16:07.984289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:01.771 [2024-12-16 22:16:07.984295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:01.771 [2024-12-16 22:16:07.984303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:01.771 [2024-12-16 22:16:07.984310] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.771 [2024-12-16 22:16:07.984320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.771 [2024-12-16 22:16:07.984331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.771 [2024-12-16 22:16:07.984338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.771 [2024-12-16 22:16:07.984345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.771 [2024-12-16 22:16:07.984352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.771 [2024-12-16 22:16:07.984359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:07.984367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.771 [2024-12-16 22:16:07.984377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:20:01.771 [2024-12-16 22:16:07.984384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:07.997465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:07.997506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.771 [2024-12-16 22:16:07.997517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.028 ms 00:20:01.771 [2024-12-16 22:16:07.997525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:07.997655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:07.997673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.771 [2024-12-16 22:16:07.997682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:01.771 [2024-12-16 22:16:07.997690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.017018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.017071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.771 [2024-12-16 22:16:08.017085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.304 ms 00:20:01.771 [2024-12-16 22:16:08.017103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.017197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.017210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.771 [2024-12-16 22:16:08.017218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.771 [2024-12-16 22:16:08.017230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.017707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.017741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.771 [2024-12-16 22:16:08.017782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:20:01.771 [2024-12-16 22:16:08.017791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.017960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.017975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.771 [2024-12-16 22:16:08.017984] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:20:01.771 [2024-12-16 22:16:08.017993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.025964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.026012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.771 [2024-12-16 22:16:08.026023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.946 ms 00:20:01.771 [2024-12-16 22:16:08.026043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.029995] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:01.771 [2024-12-16 22:16:08.030049] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:01.771 [2024-12-16 22:16:08.030062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.030070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:01.771 [2024-12-16 22:16:08.030079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.898 ms 00:20:01.771 [2024-12-16 22:16:08.030086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.045663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.045715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:01.771 [2024-12-16 22:16:08.045737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.497 ms 00:20:01.771 [2024-12-16 22:16:08.045756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.048502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.048545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:01.771 [2024-12-16 22:16:08.048555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:20:01.771 [2024-12-16 22:16:08.048562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.051194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.051252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:01.771 [2024-12-16 22:16:08.051263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:20:01.771 [2024-12-16 22:16:08.051269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.051617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.051634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.771 [2024-12-16 22:16:08.051644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:01.771 [2024-12-16 22:16:08.051651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.075075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.075127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:01.771 [2024-12-16 22:16:08.075141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.398 ms 00:20:01.771 [2024-12-16 22:16:08.075149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.083198] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.771 [2024-12-16 22:16:08.102750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.102804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.771 [2024-12-16 22:16:08.102818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.507 ms 00:20:01.771 [2024-12-16 22:16:08.102826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.102939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.102956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:01.771 [2024-12-16 22:16:08.102970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:01.771 [2024-12-16 22:16:08.102981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.103039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.103049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.771 [2024-12-16 22:16:08.103058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:01.771 [2024-12-16 22:16:08.103066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.103097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.103107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.771 [2024-12-16 22:16:08.103119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:01.771 [2024-12-16 22:16:08.103129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.103167] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:01.771 [2024-12-16 22:16:08.103179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.103186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:01.771 [2024-12-16 22:16:08.103194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:01.771 [2024-12-16 22:16:08.103201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.109143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.109193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.771 [2024-12-16 22:16:08.109213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.915 ms 00:20:01.771 [2024-12-16 22:16:08.109225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 [2024-12-16 22:16:08.109315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.771 [2024-12-16 22:16:08.109326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.771 [2024-12-16 22:16:08.109335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:01.771 [2024-12-16 22:16:08.109344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.771 
[2024-12-16 22:16:08.110398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.771 [2024-12-16 22:16:08.111709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 146.559 ms, result 0 00:20:01.772 [2024-12-16 22:16:08.112886] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.033 [2024-12-16 22:16:08.120361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.033  [2024-12-16T22:16:08.380Z] Copying: 4096/4096 [kB] (average 16 MBps)[2024-12-16 22:16:08.359472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.033 [2024-12-16 22:16:08.360598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.033 [2024-12-16 22:16:08.360646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.033 [2024-12-16 22:16:08.360659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:02.033 [2024-12-16 22:16:08.360667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.033 [2024-12-16 22:16:08.360689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:02.033 [2024-12-16 22:16:08.361375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.033 [2024-12-16 22:16:08.361421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.033 [2024-12-16 22:16:08.361436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:20:02.033 [2024-12-16 22:16:08.361445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.033 [2024-12-16 22:16:08.363673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.033 [2024-12-16 22:16:08.363720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.033 [2024-12-16 22:16:08.363735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:20:02.033 [2024-12-16 22:16:08.363742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.033 [2024-12-16 22:16:08.368071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.033 [2024-12-16 22:16:08.368106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:02.033 [2024-12-16 22:16:08.368116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:20:02.033 [2024-12-16 22:16:08.368124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.033 [2024-12-16 22:16:08.375014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.033 [2024-12-16 22:16:08.375072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.033 [2024-12-16 22:16:08.375085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:20:02.033 [2024-12-16 22:16:08.375093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.033 [2024-12-16 22:16:08.377994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.033 [2024-12-16 22:16:08.378043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.033 [2024-12-16 22:16:08.378054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.839 ms 00:20:02.033 [2024-12-16 22:16:08.378061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.296 [2024-12-16 22:16:08.382832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.296 [2024-12-16 22:16:08.382897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.296 [2024-12-16 22:16:08.382908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.727 ms 00:20:02.296 [2024-12-16 22:16:08.382916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.296 [2024-12-16 22:16:08.383053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.296 [2024-12-16 22:16:08.383070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.296 [2024-12-16 22:16:08.383081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:02.296 [2024-12-16 22:16:08.383089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.296 [2024-12-16 22:16:08.385552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.296 [2024-12-16 22:16:08.385600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.296 [2024-12-16 22:16:08.385610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:20:02.296 [2024-12-16 22:16:08.385617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.296 [2024-12-16 22:16:08.387692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.296 [2024-12-16 22:16:08.387739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.296 [2024-12-16 22:16:08.387750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:20:02.296 [2024-12-16 22:16:08.387756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.296 [2024-12-16 22:16:08.389158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.296 [2024-12-16 22:16:08.389201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:02.296 [2024-12-16 22:16:08.389212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:20:02.296 [2024-12-16 22:16:08.389221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.296 [2024-12-16 22:16:08.390939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.296 [2024-12-16 22:16:08.390987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:02.296 [2024-12-16 22:16:08.390997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:20:02.296 [2024-12-16 22:16:08.391004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.296 [2024-12-16 22:16:08.391086] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:02.296 [2024-12-16 22:16:08.391103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 
22:16:08.391136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:20:02.296 [2024-12-16 22:16:08.391318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:02.296 [2024-12-16 22:16:08.391519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:02.297 [2024-12-16 22:16:08.391874] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:02.297 [2024-12-16 22:16:08.391882] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:20:02.297 [2024-12-16 22:16:08.391891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:02.297 [2024-12-16 22:16:08.391899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:02.297 
[2024-12-16 22:16:08.391906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:02.297 [2024-12-16 22:16:08.391914] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:02.297 [2024-12-16 22:16:08.391921] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:02.297 [2024-12-16 22:16:08.391932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:02.297 [2024-12-16 22:16:08.391940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:02.297 [2024-12-16 22:16:08.391947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:02.297 [2024-12-16 22:16:08.391953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:02.297 [2024-12-16 22:16:08.391960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.297 [2024-12-16 22:16:08.391967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:02.297 [2024-12-16 22:16:08.391981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:20:02.297 [2024-12-16 22:16:08.391989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.394085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.297 [2024-12-16 22:16:08.394122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:02.297 [2024-12-16 22:16:08.394133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:20:02.297 [2024-12-16 22:16:08.394148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.394268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.297 [2024-12-16 22:16:08.394278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:02.297 [2024-12-16 22:16:08.394287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:02.297 [2024-12-16 22:16:08.394296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.401831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.401912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.297 [2024-12-16 22:16:08.401933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.401941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.402023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.402032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.297 [2024-12-16 22:16:08.402041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.402049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.402102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.402112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.297 [2024-12-16 22:16:08.402120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.402131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.402148] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.402156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.297 [2024-12-16 22:16:08.402164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.402171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.415362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.415412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.297 [2024-12-16 22:16:08.415424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.415435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.425250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.425302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.297 [2024-12-16 22:16:08.425313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.425322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.425353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.425362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.297 [2024-12-16 22:16:08.425370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.425386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.425421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.425431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.297 [2024-12-16 22:16:08.425439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.425450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.425533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.425544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.297 [2024-12-16 22:16:08.425552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.297 [2024-12-16 22:16:08.425559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.297 [2024-12-16 22:16:08.425589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.297 [2024-12-16 22:16:08.425602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:02.298 [2024-12-16 22:16:08.425610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.298 [2024-12-16 22:16:08.425618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.298 [2024-12-16 22:16:08.425662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.298 [2024-12-16 22:16:08.425672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.298 [2024-12-16 22:16:08.425684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.298 [2024-12-16 22:16:08.425692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:02.298 [2024-12-16 22:16:08.425740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.298 [2024-12-16 22:16:08.425774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.298 [2024-12-16 22:16:08.425783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.298 [2024-12-16 22:16:08.425791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.298 [2024-12-16 22:16:08.426000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.372 ms, result 0 00:20:02.298 00:20:02.298 00:20:02.559 22:16:08 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89866 00:20:02.559 22:16:08 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89866 00:20:02.559 22:16:08 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:02.559 22:16:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89866 ']' 00:20:02.559 22:16:08 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.559 22:16:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.559 22:16:08 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.559 22:16:08 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.559 22:16:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:02.559 [2024-12-16 22:16:08.725817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:20:02.559 [2024-12-16 22:16:08.725983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89866 ] 00:20:02.559 [2024-12-16 22:16:08.884338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.820 [2024-12-16 22:16:08.912851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.392 22:16:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.392 22:16:09 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:03.392 22:16:09 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:03.654 [2024-12-16 22:16:09.787392] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:03.654 [2024-12-16 22:16:09.787479] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:03.654 [2024-12-16 22:16:09.963307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.963372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:03.654 [2024-12-16 22:16:09.963386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:03.654 [2024-12-16 22:16:09.963397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.965911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.965960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.654 [2024-12-16 22:16:09.965970] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.495 ms 00:20:03.654 [2024-12-16 22:16:09.965980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.966077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:03.654 [2024-12-16 22:16:09.966337] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:03.654 [2024-12-16 22:16:09.966353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.966366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.654 [2024-12-16 22:16:09.966376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:03.654 [2024-12-16 22:16:09.966386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.968228] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:03.654 [2024-12-16 22:16:09.971583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.971636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:03.654 [2024-12-16 22:16:09.971654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.353 ms 00:20:03.654 [2024-12-16 22:16:09.971662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.971762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.971773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:03.654 [2024-12-16 22:16:09.971787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:03.654 [2024-12-16 22:16:09.971794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.979741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.979782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.654 [2024-12-16 22:16:09.979795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.869 ms 00:20:03.654 [2024-12-16 22:16:09.979806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.979947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.979961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.654 [2024-12-16 22:16:09.979973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:03.654 [2024-12-16 22:16:09.979984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.980012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.980023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:03.654 [2024-12-16 22:16:09.980033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:03.654 [2024-12-16 22:16:09.980043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.980069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:03.654 [2024-12-16 22:16:09.982090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:03.654 [2024-12-16 22:16:09.982130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.654 [2024-12-16 22:16:09.982147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 00:20:03.654 [2024-12-16 22:16:09.982157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.982200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.654 [2024-12-16 22:16:09.982211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:03.654 [2024-12-16 22:16:09.982219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:03.654 [2024-12-16 22:16:09.982229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.654 [2024-12-16 22:16:09.982249] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:03.654 [2024-12-16 22:16:09.982273] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:03.654 [2024-12-16 22:16:09.982315] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:03.654 [2024-12-16 22:16:09.982337] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:03.654 [2024-12-16 22:16:09.982443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:03.654 [2024-12-16 22:16:09.982462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:03.655 [2024-12-16 22:16:09.982473] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:03.655 [2024-12-16 22:16:09.982485] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:03.655 [2024-12-16 22:16:09.982495] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:03.655 [2024-12-16 22:16:09.982507] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:03.655 [2024-12-16 22:16:09.982515] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:03.655 [2024-12-16 22:16:09.982525] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:03.655 [2024-12-16 22:16:09.982535] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:03.655 [2024-12-16 22:16:09.982544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.655 [2024-12-16 22:16:09.982552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:03.655 [2024-12-16 22:16:09.982561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:03.655 [2024-12-16 22:16:09.982569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.655 [2024-12-16 22:16:09.982659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.655 [2024-12-16 22:16:09.982667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:03.655 [2024-12-16 22:16:09.982677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:03.655 [2024-12-16 22:16:09.982684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.655 [2024-12-16 22:16:09.982789] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:03.655 [2024-12-16 22:16:09.982802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:03.655 [2024-12-16 22:16:09.982815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.655 [2024-12-16 22:16:09.982824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.655 [2024-12-16 22:16:09.982865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:03.655 [2024-12-16 22:16:09.982881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:03.655 [2024-12-16 22:16:09.982891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:03.655 [2024-12-16 22:16:09.982899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:03.655 [2024-12-16 22:16:09.982909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:03.655 [2024-12-16 22:16:09.982916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.655 [2024-12-16 22:16:09.982927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:03.655 [2024-12-16 22:16:09.982935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:03.655 [2024-12-16 22:16:09.982945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.655 [2024-12-16 22:16:09.982952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:03.655 [2024-12-16 22:16:09.982963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:03.655 [2024-12-16 22:16:09.982970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.655 [2024-12-16 22:16:09.982980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:03.655 [2024-12-16 22:16:09.982988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:03.655 [2024-12-16 22:16:09.982998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:03.655 [2024-12-16 22:16:09.983021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.655 [2024-12-16 22:16:09.983040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:03.655 [2024-12-16 22:16:09.983047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.655 [2024-12-16 22:16:09.983065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:03.655 [2024-12-16 22:16:09.983075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.655 [2024-12-16 22:16:09.983092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:03.655 [2024-12-16 22:16:09.983099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.655 [2024-12-16 22:16:09.983118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:03.655 [2024-12-16 
22:16:09.983128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.655 [2024-12-16 22:16:09.983146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:03.655 [2024-12-16 22:16:09.983154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:03.655 [2024-12-16 22:16:09.983166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.655 [2024-12-16 22:16:09.983174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:03.655 [2024-12-16 22:16:09.983183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:03.655 [2024-12-16 22:16:09.983189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:03.655 [2024-12-16 22:16:09.983205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:03.655 [2024-12-16 22:16:09.983214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983220] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:03.655 [2024-12-16 22:16:09.983233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:03.655 [2024-12-16 22:16:09.983240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.655 [2024-12-16 22:16:09.983250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.655 [2024-12-16 22:16:09.983258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:03.655 [2024-12-16 22:16:09.983267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:03.655 [2024-12-16 22:16:09.983274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:03.655 [2024-12-16 22:16:09.983282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:03.655 [2024-12-16 22:16:09.983289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:03.655 [2024-12-16 22:16:09.983303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:03.655 [2024-12-16 22:16:09.983313] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:03.655 [2024-12-16 22:16:09.983326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.655 [2024-12-16 22:16:09.983337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:03.655 [2024-12-16 22:16:09.983347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:03.655 [2024-12-16 22:16:09.983354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:03.655 [2024-12-16 22:16:09.983362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:03.655 [2024-12-16 22:16:09.983370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:03.655 
[2024-12-16 22:16:09.983379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:03.655 [2024-12-16 22:16:09.983386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:03.655 [2024-12-16 22:16:09.983395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:03.655 [2024-12-16 22:16:09.983402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:03.655 [2024-12-16 22:16:09.983412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:03.655 [2024-12-16 22:16:09.983419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:03.655 [2024-12-16 22:16:09.983427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:03.655 [2024-12-16 22:16:09.983436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:03.655 [2024-12-16 22:16:09.983448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:03.655 [2024-12-16 22:16:09.983455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:03.655 [2024-12-16 22:16:09.983468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.655 [2024-12-16 22:16:09.983476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:03.655 [2024-12-16 22:16:09.983487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:03.655 [2024-12-16 22:16:09.983494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:03.655 [2024-12-16 22:16:09.983504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:03.655 [2024-12-16 22:16:09.983511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.655 [2024-12-16 22:16:09.983522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:03.655 [2024-12-16 22:16:09.983529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:20:03.655 [2024-12-16 22:16:09.983540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.655 [2024-12-16 22:16:09.997124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.655 [2024-12-16 22:16:09.997171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.655 [2024-12-16 22:16:09.997184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.526 ms 00:20:03.655 [2024-12-16 22:16:09.997195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.655 [2024-12-16 22:16:09.997323] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.655 [2024-12-16 22:16:09.997338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:03.655 [2024-12-16 22:16:09.997347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:03.655 [2024-12-16 22:16:09.997357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.916 [2024-12-16 22:16:10.010564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.916 [2024-12-16 22:16:10.010616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.916 [2024-12-16 22:16:10.010627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.186 ms 00:20:03.916 [2024-12-16 22:16:10.010639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.916 [2024-12-16 22:16:10.010707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.916 [2024-12-16 22:16:10.010720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.916 [2024-12-16 22:16:10.010728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:03.916 [2024-12-16 22:16:10.010739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.916 [2024-12-16 22:16:10.011305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.916 [2024-12-16 22:16:10.011340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.916 [2024-12-16 22:16:10.011353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:20:03.916 [2024-12-16 22:16:10.011364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.916 [2024-12-16 22:16:10.011517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.916 [2024-12-16 22:16:10.011533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.916 [2024-12-16 22:16:10.011543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:03.916 [2024-12-16 22:16:10.011554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.916 [2024-12-16 22:16:10.019993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.916 [2024-12-16 22:16:10.020048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.916 [2024-12-16 22:16:10.020062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.415 ms 00:20:03.917 [2024-12-16 22:16:10.020072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.036692] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:03.917 [2024-12-16 22:16:10.036755] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:03.917 [2024-12-16 22:16:10.036770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.036781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:03.917 [2024-12-16 22:16:10.036792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.598 ms 00:20:03.917 [2024-12-16 22:16:10.036802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.056264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 
22:16:10.056321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:03.917 [2024-12-16 22:16:10.056333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.382 ms 00:20:03.917 [2024-12-16 22:16:10.056346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.059424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.059477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:03.917 [2024-12-16 22:16:10.059488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:20:03.917 [2024-12-16 22:16:10.059497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.062294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.062351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:03.917 [2024-12-16 22:16:10.062362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.746 ms 00:20:03.917 [2024-12-16 22:16:10.062371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.062726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.062750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:03.917 [2024-12-16 22:16:10.062761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:20:03.917 [2024-12-16 22:16:10.062771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.087151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.087219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:03.917 [2024-12-16 22:16:10.087233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.353 ms 00:20:03.917 [2024-12-16 22:16:10.087246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.095302] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:03.917 [2024-12-16 22:16:10.114217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.114267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:03.917 [2024-12-16 22:16:10.114283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.873 ms 00:20:03.917 [2024-12-16 22:16:10.114291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.114381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.114395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:03.917 [2024-12-16 22:16:10.114406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:03.917 [2024-12-16 22:16:10.114415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.114475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.114485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:03.917 [2024-12-16 22:16:10.114501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:03.917 [2024-12-16 
22:16:10.114509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.114536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.114545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:03.917 [2024-12-16 22:16:10.114563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:03.917 [2024-12-16 22:16:10.114571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.114613] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:03.917 [2024-12-16 22:16:10.114622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.114632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:03.917 [2024-12-16 22:16:10.114640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:03.917 [2024-12-16 22:16:10.114649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.120457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.120526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:03.917 [2024-12-16 22:16:10.120538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:20:03.917 [2024-12-16 22:16:10.120551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.120643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.917 [2024-12-16 22:16:10.120655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:03.917 [2024-12-16 22:16:10.120666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:03.917 [2024-12-16 22:16:10.120676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.917 [2024-12-16 22:16:10.121782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.917 [2024-12-16 22:16:10.123182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.155 ms, result 0 00:20:03.917 [2024-12-16 22:16:10.125320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:03.917 Some configs were skipped because the RPC state that can call them passed over. 
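A note on reading the startup dumps above: the region dump gives offsets and sizes in MiB while the superblock dump gives blk_offs/blk_sz in FTL blocks, and the two agree under the 4096-byte FTL block size they jointly imply. A quick bash sanity check (illustrative only, not part of the test run):

  # blk_sz (in blocks) -> MiB, assuming the 4096-byte FTL block these figures imply
  echo $(( 0x5a00 * 4096 / 1024 / 1024 ))      # type:0x2 -> 90, matching the 90.00 MiB l2p region
  echo $(( 0x800 * 4096 / 1024 / 1024 ))       # type:0xa..0xd -> 8, matching the 8.00 MiB p2l0..p2l3 regions
  echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # type:0x9 (base dev) -> 102400, matching the 102400.00 MiB data_btm region

The same block size ties out the L2P figures the layout setup reports (23592960 entries at 4 bytes each = 94371840 bytes = 90 MiB, exactly the l2p region), and it frames the two bdev_ftl_unmap calls below: they trim the first and the last 1024 blocks of the 23592960-block address space (23591936 + 1024 = 23592960).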
00:20:03.917 22:16:10 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:04.178 [2024-12-16 22:16:10.363239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.178 [2024-12-16 22:16:10.363295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:04.178 [2024-12-16 22:16:10.363312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:20:04.178 [2024-12-16 22:16:10.363323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.178 [2024-12-16 22:16:10.363362] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.538 ms, result 0 00:20:04.178 true 00:20:04.178 22:16:10 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:04.440 [2024-12-16 22:16:10.570636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.440 [2024-12-16 22:16:10.570698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:04.440 [2024-12-16 22:16:10.570711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:20:04.440 [2024-12-16 22:16:10.570721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.440 [2024-12-16 22:16:10.570759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.686 ms, result 0 00:20:04.440 true 00:20:04.440 22:16:10 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89866 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89866 ']' 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89866 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89866 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89866' 00:20:04.440 killing process with pid 89866 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89866 00:20:04.440 22:16:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89866 00:20:04.440 [2024-12-16 22:16:10.741246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.440 [2024-12-16 22:16:10.741309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:04.440 [2024-12-16 22:16:10.741324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:04.440 [2024-12-16 22:16:10.741337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.741362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:04.441 [2024-12-16 22:16:10.741965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.741997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:04.441 [2024-12-16 22:16:10.742009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.588 ms 00:20:04.441 [2024-12-16 22:16:10.742019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.742309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.742329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:04.441 [2024-12-16 22:16:10.742339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:04.441 [2024-12-16 22:16:10.742349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.746869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.746907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:04.441 [2024-12-16 22:16:10.746917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.500 ms 00:20:04.441 [2024-12-16 22:16:10.746931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.753790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.753827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:04.441 [2024-12-16 22:16:10.753844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.824 ms 00:20:04.441 [2024-12-16 22:16:10.753856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.756095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.756135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:04.441 [2024-12-16 22:16:10.756144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:20:04.441 [2024-12-16 22:16:10.756153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.760911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.760953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:04.441 [2024-12-16 22:16:10.760965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:20:04.441 [2024-12-16 22:16:10.760974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.761105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.761118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:04.441 [2024-12-16 22:16:10.761127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:04.441 [2024-12-16 22:16:10.761136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.764135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.764182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:04.441 [2024-12-16 22:16:10.764192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:20:04.441 [2024-12-16 22:16:10.764206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.766542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.766584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:04.441 [2024-12-16 
22:16:10.766593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.299 ms 00:20:04.441 [2024-12-16 22:16:10.766602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.767874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.767912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:04.441 [2024-12-16 22:16:10.767921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:20:04.441 [2024-12-16 22:16:10.767930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.769281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.441 [2024-12-16 22:16:10.769322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:04.441 [2024-12-16 22:16:10.769332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:20:04.441 [2024-12-16 22:16:10.769342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.441 [2024-12-16 22:16:10.769376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:04.441 [2024-12-16 22:16:10.769394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769546] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 
22:16:10.769773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:04.441 [2024-12-16 22:16:10.769907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.769989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:04.442 [2024-12-16 22:16:10.770002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:04.442 [2024-12-16 22:16:10.770300] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:04.442 [2024-12-16 22:16:10.770308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:20:04.442 [2024-12-16 22:16:10.770320] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:04.442 [2024-12-16 22:16:10.770328] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:04.442 [2024-12-16 22:16:10.770337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:04.442 [2024-12-16 22:16:10.770345] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:04.442 [2024-12-16 22:16:10.770354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:04.442 [2024-12-16 22:16:10.770365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:04.442 [2024-12-16 22:16:10.770375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:04.442 [2024-12-16 22:16:10.770381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:04.442 [2024-12-16 22:16:10.770388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:04.442 [2024-12-16 22:16:10.770397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.442 [2024-12-16 22:16:10.770405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:04.442 [2024-12-16 22:16:10.770414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:20:04.442 [2024-12-16 22:16:10.770425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.442 [2024-12-16 22:16:10.772079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.442 [2024-12-16 22:16:10.772111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:04.442 [2024-12-16 22:16:10.772120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:20:04.442 [2024-12-16 22:16:10.772129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.442 [2024-12-16 22:16:10.772218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:04.442 [2024-12-16 22:16:10.772228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:04.442 [2024-12-16 22:16:10.772237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:04.442 [2024-12-16 22:16:10.772246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.442 [2024-12-16 22:16:10.778307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.442 [2024-12-16 22:16:10.778346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.442 [2024-12-16 22:16:10.778356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.442 [2024-12-16 22:16:10.778364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.442 [2024-12-16 22:16:10.778443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.442 [2024-12-16 22:16:10.778455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.442 [2024-12-16 22:16:10.778463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.442 [2024-12-16 22:16:10.778475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.442 [2024-12-16 22:16:10.778518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.442 [2024-12-16 22:16:10.778530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.442 [2024-12-16 22:16:10.778538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.442 [2024-12-16 22:16:10.778547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.442 [2024-12-16 22:16:10.778564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.442 [2024-12-16 22:16:10.778574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.442 [2024-12-16 22:16:10.778585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.442 [2024-12-16 22:16:10.778594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.789294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.789343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.704 [2024-12-16 22:16:10.789353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.789368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.797720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.797777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.704 [2024-12-16 22:16:10.797787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.797800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.797911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.797925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.704 [2024-12-16 22:16:10.797935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.797945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
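An aside on the shutdown dump above: all 100 bands report 0 / 261120 valid blocks (the trim test wrote no user data), and WAF: inf is simply the write-amplification ratio with a zero denominator (960 total writes against 0 user writes). At the same 4 KiB block size the band geometry works out as follows (bash, illustrative only):

  echo $(( 261120 * 4096 / 1024 / 1024 ))        # -> 1020 MiB per band
  echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))  # -> 102000 MiB across the 100 bands

which fits inside the 102400.00 MiB data_btm region dumped at startup. The surrounding Rollback entries (duration: 0.000 ms) are the shutdown path unwinding each startup step, effectively no-ops here.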
00:20:04.704 [2024-12-16 22:16:10.797976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.797986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.704 [2024-12-16 22:16:10.797994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.798004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.798074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.798087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.704 [2024-12-16 22:16:10.798096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.798106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.798137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.798150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:04.704 [2024-12-16 22:16:10.798158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.798169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.798211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.798228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.704 [2024-12-16 22:16:10.798240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.798249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.798296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.704 [2024-12-16 22:16:10.798309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.704 [2024-12-16 22:16:10.798317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.704 [2024-12-16 22:16:10.798327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.704 [2024-12-16 22:16:10.798464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.195 ms, result 0 00:20:04.704 22:16:10 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:04.965 [2024-12-16 22:16:11.069632] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
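Two steps are interleaved above: trim.sh@102 tears down the target via killprocess, and trim.sh@105 launches spdk_dd to dump the first 65536 blocks of ftl0 into test/ftl/data (256 MiB if the count is in 4 KiB FTL blocks); the new process's EAL banner continues below. The traced killprocess amounts to something like this sketch (reconstructed from the autotest_common.sh trace lines above, not the verbatim helper):

  pid=89866
  [ -z "$pid" ] && exit 1                             # @954: a pid is required
  kill -0 "$pid"                                      # @958: verify the process is still alive
  [ "$(uname)" = Linux ] &&
    process_name=$(ps --no-headers -o comm= "$pid")   # @959/@960: resolve the name -> reactor_0
  [ "$process_name" = sudo ] || {                     # @964: sudo wrappers get separate handling (sketch)
    echo "killing process with pid $pid"              # @972
    kill "$pid"                                       # @973: default SIGTERM
    wait "$pid"                                       # @978: reap it and propagate the exit status
  }

Since the dump starts at LBA 0 it covers the first trimmed extent (1024 blocks = 4 MiB), so a spot-check along these lines would show how that extent reads back (hypothetical, not part of trim.sh):

  cmp -n $(( 1024 * 4096 )) /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero &&
    echo 'first trimmed extent reads back as zeroes'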
00:20:04.965 [2024-12-16 22:16:11.069799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89902 ] 00:20:04.965 [2024-12-16 22:16:11.231694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.965 [2024-12-16 22:16:11.259991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.227 [2024-12-16 22:16:11.375797] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:05.227 [2024-12-16 22:16:11.375911] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:05.227 [2024-12-16 22:16:11.534376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.534440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:05.227 [2024-12-16 22:16:11.534460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:05.227 [2024-12-16 22:16:11.534476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.537039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.537086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.227 [2024-12-16 22:16:11.537100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:20:05.227 [2024-12-16 22:16:11.537109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.537209] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:05.227 [2024-12-16 22:16:11.537498] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:05.227 [2024-12-16 22:16:11.537526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.537535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.227 [2024-12-16 22:16:11.537548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:05.227 [2024-12-16 22:16:11.537556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.539389] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:05.227 [2024-12-16 22:16:11.543159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.543211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:05.227 [2024-12-16 22:16:11.543228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:20:05.227 [2024-12-16 22:16:11.543237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.543320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.543331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:05.227 [2024-12-16 22:16:11.543340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:05.227 [2024-12-16 22:16:11.543347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.551643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:05.227 [2024-12-16 22:16:11.551690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.227 [2024-12-16 22:16:11.551701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.230 ms 00:20:05.227 [2024-12-16 22:16:11.551709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.551881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.551894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.227 [2024-12-16 22:16:11.551904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:05.227 [2024-12-16 22:16:11.551916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.551946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.551955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:05.227 [2024-12-16 22:16:11.551963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:05.227 [2024-12-16 22:16:11.551970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.551997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:05.227 [2024-12-16 22:16:11.554049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.554092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.227 [2024-12-16 22:16:11.554102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:20:05.227 [2024-12-16 22:16:11.554115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.554163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.554175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:05.227 [2024-12-16 22:16:11.554183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:05.227 [2024-12-16 22:16:11.554190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.554214] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:05.227 [2024-12-16 22:16:11.554237] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:05.227 [2024-12-16 22:16:11.554274] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:05.227 [2024-12-16 22:16:11.554296] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:05.227 [2024-12-16 22:16:11.554405] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:05.227 [2024-12-16 22:16:11.554416] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:05.227 [2024-12-16 22:16:11.554431] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:05.227 [2024-12-16 22:16:11.554442] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:05.227 [2024-12-16 22:16:11.554452] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:05.227 [2024-12-16 22:16:11.554460] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:05.227 [2024-12-16 22:16:11.554467] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:05.227 [2024-12-16 22:16:11.554475] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:05.227 [2024-12-16 22:16:11.554485] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:05.227 [2024-12-16 22:16:11.554496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.554504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:05.227 [2024-12-16 22:16:11.554511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:05.227 [2024-12-16 22:16:11.554519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.554608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-12-16 22:16:11.554617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:05.227 [2024-12-16 22:16:11.554625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:05.227 [2024-12-16 22:16:11.554633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-12-16 22:16:11.554736] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:05.227 [2024-12-16 22:16:11.554762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:05.227 [2024-12-16 22:16:11.554772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.227 [2024-12-16 22:16:11.554781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.227 [2024-12-16 22:16:11.554791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:05.227 [2024-12-16 22:16:11.554798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:05.227 [2024-12-16 22:16:11.554807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:05.227 [2024-12-16 22:16:11.554820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:05.227 [2024-12-16 22:16:11.554828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:05.227 [2024-12-16 22:16:11.554852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.227 [2024-12-16 22:16:11.554860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:05.227 [2024-12-16 22:16:11.554869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:05.227 [2024-12-16 22:16:11.554876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.227 [2024-12-16 22:16:11.554886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:05.227 [2024-12-16 22:16:11.554894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:05.227 [2024-12-16 22:16:11.554902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.227 [2024-12-16 22:16:11.554910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:05.227 [2024-12-16 22:16:11.554920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:05.227 [2024-12-16 22:16:11.554928] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.227 [2024-12-16 22:16:11.554936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:05.227 [2024-12-16 22:16:11.554945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:05.227 [2024-12-16 22:16:11.554953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.227 [2024-12-16 22:16:11.554960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:05.227 [2024-12-16 22:16:11.554973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:05.227 [2024-12-16 22:16:11.554981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.227 [2024-12-16 22:16:11.554989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:05.227 [2024-12-16 22:16:11.554997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:05.227 [2024-12-16 22:16:11.555008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.227 [2024-12-16 22:16:11.555015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:05.227 [2024-12-16 22:16:11.555024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:05.227 [2024-12-16 22:16:11.555032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.227 [2024-12-16 22:16:11.555040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:05.227 [2024-12-16 22:16:11.555048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:05.227 [2024-12-16 22:16:11.555055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.227 [2024-12-16 22:16:11.555063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:05.227 [2024-12-16 22:16:11.555071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:05.228 [2024-12-16 22:16:11.555078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.228 [2024-12-16 22:16:11.555086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:05.228 [2024-12-16 22:16:11.555094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:05.228 [2024-12-16 22:16:11.555104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.228 [2024-12-16 22:16:11.555112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:05.228 [2024-12-16 22:16:11.555119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:05.228 [2024-12-16 22:16:11.555127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.228 [2024-12-16 22:16:11.555135] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:05.228 [2024-12-16 22:16:11.555144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:05.228 [2024-12-16 22:16:11.555154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.228 [2024-12-16 22:16:11.555163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.228 [2024-12-16 22:16:11.555172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:05.228 [2024-12-16 22:16:11.555180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:05.228 [2024-12-16 22:16:11.555187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:05.228 
[2024-12-16 22:16:11.555195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:05.228 [2024-12-16 22:16:11.555203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:05.228 [2024-12-16 22:16:11.555210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:05.228 [2024-12-16 22:16:11.555219] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:05.228 [2024-12-16 22:16:11.555229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.228 [2024-12-16 22:16:11.555239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:05.228 [2024-12-16 22:16:11.555247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:05.228 [2024-12-16 22:16:11.555254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:05.228 [2024-12-16 22:16:11.555262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:05.228 [2024-12-16 22:16:11.555269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:05.228 [2024-12-16 22:16:11.555276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:05.228 [2024-12-16 22:16:11.555283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:05.228 [2024-12-16 22:16:11.555290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:05.228 [2024-12-16 22:16:11.555297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:05.228 [2024-12-16 22:16:11.555304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:05.228 [2024-12-16 22:16:11.555311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:05.228 [2024-12-16 22:16:11.555319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:05.228 [2024-12-16 22:16:11.555326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:05.228 [2024-12-16 22:16:11.555334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:05.228 [2024-12-16 22:16:11.555341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:05.228 [2024-12-16 22:16:11.555351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.228 [2024-12-16 22:16:11.555366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:05.228 [2024-12-16 22:16:11.555373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:05.228 [2024-12-16 22:16:11.555381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:05.228 [2024-12-16 22:16:11.555388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:05.228 [2024-12-16 22:16:11.555396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.228 [2024-12-16 22:16:11.555404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:05.228 [2024-12-16 22:16:11.555412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:20:05.228 [2024-12-16 22:16:11.555420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.228 [2024-12-16 22:16:11.569817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.228 [2024-12-16 22:16:11.569880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.228 [2024-12-16 22:16:11.569901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.345 ms 00:20:05.228 [2024-12-16 22:16:11.569909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.228 [2024-12-16 22:16:11.570043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.228 [2024-12-16 22:16:11.570062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:05.228 [2024-12-16 22:16:11.570071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:05.228 [2024-12-16 22:16:11.570079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.490 [2024-12-16 22:16:11.597348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.490 [2024-12-16 22:16:11.597429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.491 [2024-12-16 22:16:11.597459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.238 ms 00:20:05.491 [2024-12-16 22:16:11.597474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.597631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.597654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.491 [2024-12-16 22:16:11.597672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:05.491 [2024-12-16 22:16:11.597686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.598346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.598398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.491 [2024-12-16 22:16:11.598409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:20:05.491 [2024-12-16 22:16:11.598418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.598575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.598602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.491 [2024-12-16 22:16:11.598610] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:05.491 [2024-12-16 22:16:11.598624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.607150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.607205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.491 [2024-12-16 22:16:11.607216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.503 ms 00:20:05.491 [2024-12-16 22:16:11.607227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.611272] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:05.491 [2024-12-16 22:16:11.611324] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:05.491 [2024-12-16 22:16:11.611336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.611344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:05.491 [2024-12-16 22:16:11.611353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.003 ms 00:20:05.491 [2024-12-16 22:16:11.611360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.627371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.627420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:05.491 [2024-12-16 22:16:11.627431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.928 ms 00:20:05.491 [2024-12-16 22:16:11.627447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.630420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.630466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:05.491 [2024-12-16 22:16:11.630476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.850 ms 00:20:05.491 [2024-12-16 22:16:11.630483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.632985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.633038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:05.491 [2024-12-16 22:16:11.633048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:20:05.491 [2024-12-16 22:16:11.633054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.633411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.633445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:05.491 [2024-12-16 22:16:11.633460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:05.491 [2024-12-16 22:16:11.633468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.657331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.657401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:05.491 [2024-12-16 22:16:11.657414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.838 ms 00:20:05.491 [2024-12-16 22:16:11.657424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.665588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:05.491 [2024-12-16 22:16:11.684235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.684285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:05.491 [2024-12-16 22:16:11.684298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.718 ms 00:20:05.491 [2024-12-16 22:16:11.684306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.684400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.684412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:05.491 [2024-12-16 22:16:11.684425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:05.491 [2024-12-16 22:16:11.684433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.684499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.684510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:05.491 [2024-12-16 22:16:11.684519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:05.491 [2024-12-16 22:16:11.684531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.684558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.684567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:05.491 [2024-12-16 22:16:11.684576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:05.491 [2024-12-16 22:16:11.684586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.684626] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:05.491 [2024-12-16 22:16:11.684636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.684645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:05.491 [2024-12-16 22:16:11.684657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:05.491 [2024-12-16 22:16:11.684666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.690543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.690595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:05.491 [2024-12-16 22:16:11.690606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.857 ms 00:20:05.491 [2024-12-16 22:16:11.690622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 [2024-12-16 22:16:11.690716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.491 [2024-12-16 22:16:11.690727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:05.491 [2024-12-16 22:16:11.690737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:05.491 [2024-12-16 22:16:11.690745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.491 
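Editor's note: each management step above is emitted by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, which makes per-step timing easy to pull out of a capture of this log. A hypothetical awk one-liner (it assumes one *NOTICE* record per line in exactly this format; 'build.log' is a placeholder file name):

  awk '/428:trace_step/ { sub(/.*name: /, "");     n = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); print n ": " $0 }' build.log
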
[2024-12-16 22:16:11.691809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:05.491 [2024-12-16 22:16:11.693260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.110 ms, result 0 00:20:05.491 [2024-12-16 22:16:11.694631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.491 [2024-12-16 22:16:11.701968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.434  [2024-12-16T22:16:14.166Z] Copying: 18/256 [MB] (18 MBps) [2024-12-16T22:16:15.110Z] Copying: 33/256 [MB] (14 MBps) [2024-12-16T22:16:16.061Z] Copying: 45/256 [MB] (12 MBps) [2024-12-16T22:16:17.033Z] Copying: 59/256 [MB] (14 MBps) [2024-12-16T22:16:17.978Z] Copying: 72/256 [MB] (12 MBps) [2024-12-16T22:16:18.921Z] Copying: 88/256 [MB] (15 MBps) [2024-12-16T22:16:19.865Z] Copying: 108/256 [MB] (20 MBps) [2024-12-16T22:16:20.809Z] Copying: 122/256 [MB] (13 MBps) [2024-12-16T22:16:22.196Z] Copying: 137/256 [MB] (14 MBps) [2024-12-16T22:16:22.770Z] Copying: 156/256 [MB] (19 MBps) [2024-12-16T22:16:24.158Z] Copying: 169/256 [MB] (12 MBps) [2024-12-16T22:16:25.103Z] Copying: 188/256 [MB] (19 MBps) [2024-12-16T22:16:26.045Z] Copying: 208/256 [MB] (19 MBps) [2024-12-16T22:16:26.987Z] Copying: 219/256 [MB] (11 MBps) [2024-12-16T22:16:27.931Z] Copying: 240/256 [MB] (20 MBps) [2024-12-16T22:16:28.193Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-16 22:16:27.947892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.846 [2024-12-16 22:16:27.950515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.950582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.846 [2024-12-16 22:16:27.950602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:21.846 [2024-12-16 22:16:27.950614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.950648] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:21.846 [2024-12-16 22:16:27.951420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.951462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.846 [2024-12-16 22:16:27.951489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:20:21.846 [2024-12-16 22:16:27.951503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.951920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.951945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.846 [2024-12-16 22:16:27.951962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:20:21.846 [2024-12-16 22:16:27.951973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.957322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.957354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.846 [2024-12-16 22:16:27.957366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.326 ms 
00:20:21.846 [2024-12-16 22:16:27.957377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.965534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.965593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:21.846 [2024-12-16 22:16:27.965605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.130 ms 00:20:21.846 [2024-12-16 22:16:27.965617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.968590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.968647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.846 [2024-12-16 22:16:27.968659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:20:21.846 [2024-12-16 22:16:27.968666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.973608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.973672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:21.846 [2024-12-16 22:16:27.973684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.886 ms 00:20:21.846 [2024-12-16 22:16:27.973693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.973875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.973888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:21.846 [2024-12-16 22:16:27.973901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:21.846 [2024-12-16 22:16:27.973909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.976714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.976775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:21.846 [2024-12-16 22:16:27.976786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:20:21.846 [2024-12-16 22:16:27.976794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.979114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.979172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:21.846 [2024-12-16 22:16:27.979181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:20:21.846 [2024-12-16 22:16:27.979189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.981170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.981225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:21.846 [2024-12-16 22:16:27.981237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.829 ms 00:20:21.846 [2024-12-16 22:16:27.981245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.983158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.846 [2024-12-16 22:16:27.983232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:21.846 [2024-12-16 22:16:27.983244] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:20:21.846 [2024-12-16 22:16:27.983253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.846 [2024-12-16 22:16:27.983336] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:21.846 [2024-12-16 22:16:27.983368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:21.846 [2024-12-16 22:16:27.983684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983990] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.983998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 
22:16:27.984189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:21.847 [2024-12-16 22:16:27.984221] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.847 [2024-12-16 22:16:27.984230] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14ef5026-cac1-4684-8b7f-e1ccdf91ad2b 00:20:21.847 [2024-12-16 22:16:27.984238] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.847 [2024-12-16 22:16:27.984246] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.847 [2024-12-16 22:16:27.984255] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.847 [2024-12-16 22:16:27.984264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.847 [2024-12-16 22:16:27.984272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:21.847 [2024-12-16 22:16:27.984284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:21.847 [2024-12-16 22:16:27.984292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.847 [2024-12-16 22:16:27.984298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.847 [2024-12-16 22:16:27.984305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.847 [2024-12-16 22:16:27.984313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.847 [2024-12-16 22:16:27.984321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.847 [2024-12-16 22:16:27.984330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:20:21.847 [2024-12-16 22:16:27.984338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.847 [2024-12-16 22:16:27.986778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.847 [2024-12-16 22:16:27.986819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.847 [2024-12-16 22:16:27.986829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.419 ms 00:20:21.847 [2024-12-16 22:16:27.986864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.847 [2024-12-16 22:16:27.987012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.847 [2024-12-16 22:16:27.987023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.847 [2024-12-16 22:16:27.987033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:21.847 [2024-12-16 22:16:27.987040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.847 [2024-12-16 22:16:27.995182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.847 [2024-12-16 22:16:27.995245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.847 [2024-12-16 22:16:27.995256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.847 [2024-12-16 22:16:27.995269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.847 [2024-12-16 22:16:27.995335] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:21.847 [2024-12-16 22:16:27.995344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.847 [2024-12-16 22:16:27.995357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.847 [2024-12-16 22:16:27.995364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.847 [2024-12-16 22:16:27.995411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.847 [2024-12-16 22:16:27.995421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.847 [2024-12-16 22:16:27.995429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.847 [2024-12-16 22:16:27.995437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.847 [2024-12-16 22:16:27.995457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.847 [2024-12-16 22:16:27.995465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.847 [2024-12-16 22:16:27.995472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.847 [2024-12-16 22:16:27.995480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.847 [2024-12-16 22:16:28.008676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.008735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.848 [2024-12-16 22:16:28.008747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.008762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.848 [2024-12-16 22:16:28.018718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.018772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.848 [2024-12-16 22:16:28.018783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.018792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.848 [2024-12-16 22:16:28.018857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.018867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.848 [2024-12-16 22:16:28.018876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.018884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.848 [2024-12-16 22:16:28.018921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.018931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.848 [2024-12-16 22:16:28.018939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.018947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.848 [2024-12-16 22:16:28.019018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.019029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.848 [2024-12-16 22:16:28.019037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.019045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:21.848 [2024-12-16 22:16:28.019080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.019098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:21.848 [2024-12-16 22:16:28.019106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.019115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.848 [2024-12-16 22:16:28.019154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.019168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.848 [2024-12-16 22:16:28.019177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.019185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.848 [2024-12-16 22:16:28.019230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.848 [2024-12-16 22:16:28.019244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.848 [2024-12-16 22:16:28.019252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.848 [2024-12-16 22:16:28.019260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.848 [2024-12-16 22:16:28.019400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.879 ms, result 0 00:20:22.110 00:20:22.110 00:20:22.110 22:16:28 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:22.683 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:22.683 22:16:28 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:22.683 22:16:28 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:22.683 22:16:28 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:22.683 22:16:28 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:22.683 22:16:28 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:22.683 22:16:28 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:22.683 22:16:28 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89866 00:20:22.683 22:16:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89866 ']' 00:20:22.683 Process with pid 89866 is not found 00:20:22.683 22:16:28 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89866 00:20:22.683 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89866) - No such process 00:20:22.683 22:16:28 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89866 is not found' 00:20:22.683 00:20:22.683 real 1m9.105s 00:20:22.683 user 1m23.623s 00:20:22.683 sys 0m13.813s 00:20:22.683 22:16:28 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.683 22:16:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:22.683 ************************************ 00:20:22.683 END TEST ftl_trim 00:20:22.683 ************************************ 00:20:22.683 22:16:28 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:22.683 22:16:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:22.683 22:16:28 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.683 22:16:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:22.683 ************************************ 00:20:22.683 START TEST ftl_restore 00:20:22.683 ************************************ 00:20:22.683 22:16:28 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:22.683 * Looking for test storage... 00:20:22.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:22.683 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:22.683 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:22.683 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.945 22:16:29 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.945 --rc genhtml_branch_coverage=1 00:20:22.945 --rc genhtml_function_coverage=1 00:20:22.945 --rc genhtml_legend=1 00:20:22.945 --rc geninfo_all_blocks=1 00:20:22.945 --rc geninfo_unexecuted_blocks=1 00:20:22.945 00:20:22.945 ' 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.945 --rc genhtml_branch_coverage=1 00:20:22.945 --rc genhtml_function_coverage=1 00:20:22.945 --rc genhtml_legend=1 00:20:22.945 --rc geninfo_all_blocks=1 00:20:22.945 --rc geninfo_unexecuted_blocks=1 00:20:22.945 00:20:22.945 ' 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.945 --rc genhtml_branch_coverage=1 00:20:22.945 --rc genhtml_function_coverage=1 00:20:22.945 --rc genhtml_legend=1 00:20:22.945 --rc geninfo_all_blocks=1 00:20:22.945 --rc geninfo_unexecuted_blocks=1 00:20:22.945 00:20:22.945 ' 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:22.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.945 --rc genhtml_branch_coverage=1 00:20:22.945 --rc genhtml_function_coverage=1 00:20:22.945 --rc genhtml_legend=1 00:20:22.945 --rc geninfo_all_blocks=1 00:20:22.945 --rc geninfo_unexecuted_blocks=1 00:20:22.945 00:20:22.945 ' 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
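Editor's note: the xtrace above steps through the component-wise version comparison in scripts/common.sh ('lt 1.15 2'), used here to decide which coverage options the installed lcov supports. A minimal standalone sketch of the same idea (editor's reconstruction, not the actual scripts/common.sh source, and simplified to dot-separated numeric versions):

  lt() {                               # true if $1 is strictly older than $2
    local IFS=.                        # split versions on dots
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                           # equal versions are not "less than"
  }
  lt 1.15 2 && echo "installed lcov predates 2"
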
00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.JcAb5ORHnw 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:22.945 
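Editor's note: the xtrace above replays restore.sh's option handling: getopts consumes '-c 0000:00:10.0' into nv_cache, the parsed options are shifted away ('shift 2'), and the remaining positional argument becomes the base device. A sketch of that pattern (editor's reconstruction, not the restore.sh source; the meanings of -u and -f are assumptions):

  # option parsing as replayed in the xtrace above
  while getopts ":u:c:f" opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;           # NV cache PCI address (0000:00:10.0 above)
      u) uuid=$OPTARG ;;               # hypothetical: preexisting FTL device UUID
      f) fast_shutdown=1 ;;            # hypothetical: flag option, no argument
    esac
  done
  shift $((OPTIND - 1))                # logged above as 'shift 2'
  device=$1                            # base bdev PCI address (0000:00:11.0 above)
  timeout=240
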
22:16:29 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90161 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90161 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 90161 ']' 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.945 22:16:29 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:22.945 22:16:29 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:22.945 [2024-12-16 22:16:29.197334] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:20:22.945 [2024-12-16 22:16:29.197480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90161 ] 00:20:23.207 [2024-12-16 22:16:29.357826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.207 [2024-12-16 22:16:29.386161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.780 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.780 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:23.780 22:16:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:23.780 22:16:30 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:23.780 22:16:30 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:23.780 22:16:30 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:23.780 22:16:30 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:23.780 22:16:30 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:24.041 22:16:30 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:24.041 22:16:30 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:24.041 22:16:30 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:24.041 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:24.041 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:24.041 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:24.041 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:24.041 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:24.302 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:24.302 { 00:20:24.302 "name": "nvme0n1", 00:20:24.302 "aliases": [ 00:20:24.302 "ccd1f66d-5c2d-4305-8baa-b25ff41b7d51" 00:20:24.302 ], 00:20:24.302 "product_name": "NVMe disk", 00:20:24.302 "block_size": 4096, 00:20:24.302 "num_blocks": 1310720, 00:20:24.302 "uuid": 
"ccd1f66d-5c2d-4305-8baa-b25ff41b7d51", 00:20:24.302 "numa_id": -1, 00:20:24.302 "assigned_rate_limits": { 00:20:24.302 "rw_ios_per_sec": 0, 00:20:24.302 "rw_mbytes_per_sec": 0, 00:20:24.302 "r_mbytes_per_sec": 0, 00:20:24.302 "w_mbytes_per_sec": 0 00:20:24.302 }, 00:20:24.302 "claimed": true, 00:20:24.302 "claim_type": "read_many_write_one", 00:20:24.302 "zoned": false, 00:20:24.302 "supported_io_types": { 00:20:24.302 "read": true, 00:20:24.302 "write": true, 00:20:24.302 "unmap": true, 00:20:24.302 "flush": true, 00:20:24.302 "reset": true, 00:20:24.302 "nvme_admin": true, 00:20:24.302 "nvme_io": true, 00:20:24.302 "nvme_io_md": false, 00:20:24.302 "write_zeroes": true, 00:20:24.302 "zcopy": false, 00:20:24.302 "get_zone_info": false, 00:20:24.302 "zone_management": false, 00:20:24.302 "zone_append": false, 00:20:24.302 "compare": true, 00:20:24.302 "compare_and_write": false, 00:20:24.302 "abort": true, 00:20:24.302 "seek_hole": false, 00:20:24.302 "seek_data": false, 00:20:24.302 "copy": true, 00:20:24.302 "nvme_iov_md": false 00:20:24.302 }, 00:20:24.302 "driver_specific": { 00:20:24.302 "nvme": [ 00:20:24.302 { 00:20:24.302 "pci_address": "0000:00:11.0", 00:20:24.302 "trid": { 00:20:24.302 "trtype": "PCIe", 00:20:24.302 "traddr": "0000:00:11.0" 00:20:24.302 }, 00:20:24.302 "ctrlr_data": { 00:20:24.302 "cntlid": 0, 00:20:24.302 "vendor_id": "0x1b36", 00:20:24.302 "model_number": "QEMU NVMe Ctrl", 00:20:24.302 "serial_number": "12341", 00:20:24.302 "firmware_revision": "8.0.0", 00:20:24.302 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:24.302 "oacs": { 00:20:24.302 "security": 0, 00:20:24.302 "format": 1, 00:20:24.302 "firmware": 0, 00:20:24.302 "ns_manage": 1 00:20:24.302 }, 00:20:24.302 "multi_ctrlr": false, 00:20:24.302 "ana_reporting": false 00:20:24.302 }, 00:20:24.302 "vs": { 00:20:24.302 "nvme_version": "1.4" 00:20:24.302 }, 00:20:24.302 "ns_data": { 00:20:24.302 "id": 1, 00:20:24.302 "can_share": false 00:20:24.302 } 00:20:24.302 } 00:20:24.302 ], 00:20:24.302 "mp_policy": "active_passive" 00:20:24.302 } 00:20:24.302 } 00:20:24.302 ]' 00:20:24.302 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:24.302 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:24.302 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:24.302 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:24.302 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:24.302 22:16:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:24.302 22:16:30 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:24.302 22:16:30 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:24.302 22:16:30 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:24.303 22:16:30 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:24.303 22:16:30 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:24.564 22:16:30 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=dfc22ef2-20ad-40b8-ba21-cc123d07c506 00:20:24.564 22:16:30 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:24.564 22:16:30 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dfc22ef2-20ad-40b8-ba21-cc123d07c506 00:20:24.825 22:16:31 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:25.086 22:16:31 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3be2cb50-46ca-495b-b6ab-577c42500e90 00:20:25.086 22:16:31 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3be2cb50-46ca-495b-b6ab-577c42500e90 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:25.348 22:16:31 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.348 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.348 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:25.348 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:25.348 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:25.348 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.608 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:25.609 { 00:20:25.609 "name": "1b3d8e02-42b0-41ef-a8cc-6a8a2012f191", 00:20:25.609 "aliases": [ 00:20:25.609 "lvs/nvme0n1p0" 00:20:25.609 ], 00:20:25.609 "product_name": "Logical Volume", 00:20:25.609 "block_size": 4096, 00:20:25.609 "num_blocks": 26476544, 00:20:25.609 "uuid": "1b3d8e02-42b0-41ef-a8cc-6a8a2012f191", 00:20:25.609 "assigned_rate_limits": { 00:20:25.609 "rw_ios_per_sec": 0, 00:20:25.609 "rw_mbytes_per_sec": 0, 00:20:25.609 "r_mbytes_per_sec": 0, 00:20:25.609 "w_mbytes_per_sec": 0 00:20:25.609 }, 00:20:25.609 "claimed": false, 00:20:25.609 "zoned": false, 00:20:25.609 "supported_io_types": { 00:20:25.609 "read": true, 00:20:25.609 "write": true, 00:20:25.609 "unmap": true, 00:20:25.609 "flush": false, 00:20:25.609 "reset": true, 00:20:25.609 "nvme_admin": false, 00:20:25.609 "nvme_io": false, 00:20:25.609 "nvme_io_md": false, 00:20:25.609 "write_zeroes": true, 00:20:25.609 "zcopy": false, 00:20:25.609 "get_zone_info": false, 00:20:25.609 "zone_management": false, 00:20:25.609 "zone_append": false, 00:20:25.609 "compare": false, 00:20:25.609 "compare_and_write": false, 00:20:25.609 "abort": false, 00:20:25.609 "seek_hole": true, 00:20:25.609 "seek_data": true, 00:20:25.609 "copy": false, 00:20:25.609 "nvme_iov_md": false 00:20:25.609 }, 00:20:25.609 "driver_specific": { 00:20:25.609 "lvol": { 00:20:25.609 "lvol_store_uuid": "3be2cb50-46ca-495b-b6ab-577c42500e90", 00:20:25.609 "base_bdev": "nvme0n1", 00:20:25.609 "thin_provision": true, 00:20:25.609 "num_allocated_clusters": 0, 00:20:25.609 "snapshot": false, 00:20:25.609 "clone": false, 00:20:25.609 "esnap_clone": false 00:20:25.609 } 00:20:25.609 } 00:20:25.609 } 00:20:25.609 ]' 00:20:25.609 22:16:31 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:25.609 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:25.609 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:25.609 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:25.609 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:25.609 22:16:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:25.609 22:16:31 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:25.609 22:16:31 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:25.609 22:16:31 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:25.869 22:16:32 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:25.869 22:16:32 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:25.869 22:16:32 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.869 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:25.869 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:25.869 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:25.869 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:25.869 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:26.131 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:26.131 { 00:20:26.131 "name": "1b3d8e02-42b0-41ef-a8cc-6a8a2012f191", 00:20:26.131 "aliases": [ 00:20:26.131 "lvs/nvme0n1p0" 00:20:26.131 ], 00:20:26.131 "product_name": "Logical Volume", 00:20:26.131 "block_size": 4096, 00:20:26.131 "num_blocks": 26476544, 00:20:26.131 "uuid": "1b3d8e02-42b0-41ef-a8cc-6a8a2012f191", 00:20:26.131 "assigned_rate_limits": { 00:20:26.131 "rw_ios_per_sec": 0, 00:20:26.131 "rw_mbytes_per_sec": 0, 00:20:26.131 "r_mbytes_per_sec": 0, 00:20:26.131 "w_mbytes_per_sec": 0 00:20:26.131 }, 00:20:26.131 "claimed": false, 00:20:26.131 "zoned": false, 00:20:26.131 "supported_io_types": { 00:20:26.131 "read": true, 00:20:26.131 "write": true, 00:20:26.131 "unmap": true, 00:20:26.131 "flush": false, 00:20:26.131 "reset": true, 00:20:26.131 "nvme_admin": false, 00:20:26.131 "nvme_io": false, 00:20:26.131 "nvme_io_md": false, 00:20:26.131 "write_zeroes": true, 00:20:26.131 "zcopy": false, 00:20:26.131 "get_zone_info": false, 00:20:26.131 "zone_management": false, 00:20:26.131 "zone_append": false, 00:20:26.131 "compare": false, 00:20:26.131 "compare_and_write": false, 00:20:26.131 "abort": false, 00:20:26.131 "seek_hole": true, 00:20:26.131 "seek_data": true, 00:20:26.131 "copy": false, 00:20:26.131 "nvme_iov_md": false 00:20:26.131 }, 00:20:26.131 "driver_specific": { 00:20:26.131 "lvol": { 00:20:26.131 "lvol_store_uuid": "3be2cb50-46ca-495b-b6ab-577c42500e90", 00:20:26.131 "base_bdev": "nvme0n1", 00:20:26.131 "thin_provision": true, 00:20:26.131 "num_allocated_clusters": 0, 00:20:26.131 "snapshot": false, 00:20:26.131 "clone": false, 00:20:26.131 "esnap_clone": false 00:20:26.131 } 00:20:26.131 } 00:20:26.131 } 00:20:26.131 ]' 00:20:26.131 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
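
The jq pair above is the third repetition of the same size probe: each get_bdev_size call dumps the bdev's JSON over RPC, extracts block_size and num_blocks, and reports the size in MiB (26476544 blocks x 4096 B = 103424 MiB for the logical volume, 1310720 x 4096 B = 5120 MiB for nvme0n1). A minimal sketch of that helper, assuming the shape shown in this trace rather than the exact body of common/autotest_common.sh:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # as set at common.sh@10
    get_bdev_size() {
      local bdev_name=$1
      local bdev_info bs nb
      # full JSON descriptor of the named bdev, as dumped repeatedly above
      bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
      # size in MiB, rounded down: num_blocks * block_size / 1 MiB
      echo $(( nb * bs / 1024 / 1024 ))
    }

Called as get_bdev_size nvme0n1 against the QEMU controller above it prints 5120; against the thin-provisioned lvol it prints 103424, the value the trace echoes just before cache_size is fixed at 5171 and nvc0n1 is split.
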
00:20:26.131 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:26.131 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:26.131 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:26.131 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:26.131 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:26.131 22:16:32 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:26.131 22:16:32 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:26.393 22:16:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:26.393 22:16:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:26.393 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:26.393 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:26.393 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:26.393 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:26.393 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 00:20:26.655 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:26.655 { 00:20:26.655 "name": "1b3d8e02-42b0-41ef-a8cc-6a8a2012f191", 00:20:26.655 "aliases": [ 00:20:26.655 "lvs/nvme0n1p0" 00:20:26.655 ], 00:20:26.655 "product_name": "Logical Volume", 00:20:26.655 "block_size": 4096, 00:20:26.655 "num_blocks": 26476544, 00:20:26.655 "uuid": "1b3d8e02-42b0-41ef-a8cc-6a8a2012f191", 00:20:26.655 "assigned_rate_limits": { 00:20:26.655 "rw_ios_per_sec": 0, 00:20:26.655 "rw_mbytes_per_sec": 0, 00:20:26.655 "r_mbytes_per_sec": 0, 00:20:26.655 "w_mbytes_per_sec": 0 00:20:26.655 }, 00:20:26.655 "claimed": false, 00:20:26.655 "zoned": false, 00:20:26.655 "supported_io_types": { 00:20:26.655 "read": true, 00:20:26.655 "write": true, 00:20:26.655 "unmap": true, 00:20:26.655 "flush": false, 00:20:26.655 "reset": true, 00:20:26.655 "nvme_admin": false, 00:20:26.655 "nvme_io": false, 00:20:26.655 "nvme_io_md": false, 00:20:26.655 "write_zeroes": true, 00:20:26.655 "zcopy": false, 00:20:26.655 "get_zone_info": false, 00:20:26.655 "zone_management": false, 00:20:26.655 "zone_append": false, 00:20:26.655 "compare": false, 00:20:26.655 "compare_and_write": false, 00:20:26.655 "abort": false, 00:20:26.655 "seek_hole": true, 00:20:26.655 "seek_data": true, 00:20:26.655 "copy": false, 00:20:26.655 "nvme_iov_md": false 00:20:26.655 }, 00:20:26.655 "driver_specific": { 00:20:26.655 "lvol": { 00:20:26.655 "lvol_store_uuid": "3be2cb50-46ca-495b-b6ab-577c42500e90", 00:20:26.655 "base_bdev": "nvme0n1", 00:20:26.655 "thin_provision": true, 00:20:26.655 "num_allocated_clusters": 0, 00:20:26.655 "snapshot": false, 00:20:26.655 "clone": false, 00:20:26.655 "esnap_clone": false 00:20:26.655 } 00:20:26.655 } 00:20:26.655 } 00:20:26.655 ]' 00:20:26.655 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:26.655 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:26.655 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:26.655 22:16:32 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:26.655 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:26.655 22:16:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:26.655 22:16:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:26.655 22:16:32 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 --l2p_dram_limit 10' 00:20:26.655 22:16:32 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:26.655 22:16:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:26.655 22:16:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:26.655 22:16:32 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:26.655 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:26.655 22:16:32 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1b3d8e02-42b0-41ef-a8cc-6a8a2012f191 --l2p_dram_limit 10 -c nvc0n1p0 00:20:26.917 [2024-12-16 22:16:33.023154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.023201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:26.917 [2024-12-16 22:16:33.023214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:26.917 [2024-12-16 22:16:33.023223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.023281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.023293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.917 [2024-12-16 22:16:33.023303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:26.917 [2024-12-16 22:16:33.023315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.023337] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:26.917 [2024-12-16 22:16:33.023603] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:26.917 [2024-12-16 22:16:33.023625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.023635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.917 [2024-12-16 22:16:33.023644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:20:26.917 [2024-12-16 22:16:33.023653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.023717] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e1bd74c0-8120-4477-a5a8-3d7d2ecaf716 00:20:26.917 [2024-12-16 22:16:33.024785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.024818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:26.917 [2024-12-16 22:16:33.024829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:26.917 [2024-12-16 22:16:33.024848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.030092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 
22:16:33.030122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.917 [2024-12-16 22:16:33.030133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.202 ms 00:20:26.917 [2024-12-16 22:16:33.030141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.030216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.030225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.917 [2024-12-16 22:16:33.030235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:26.917 [2024-12-16 22:16:33.030242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.030286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.030296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:26.917 [2024-12-16 22:16:33.030305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:26.917 [2024-12-16 22:16:33.030312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.030334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:26.917 [2024-12-16 22:16:33.031803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.031834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.917 [2024-12-16 22:16:33.031861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:20:26.917 [2024-12-16 22:16:33.031871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.031905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.917 [2024-12-16 22:16:33.031915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:26.917 [2024-12-16 22:16:33.031923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:26.917 [2024-12-16 22:16:33.031934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.917 [2024-12-16 22:16:33.031950] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:26.917 [2024-12-16 22:16:33.032095] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:26.917 [2024-12-16 22:16:33.032106] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:26.917 [2024-12-16 22:16:33.032119] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:26.917 [2024-12-16 22:16:33.032129] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:26.917 [2024-12-16 22:16:33.032142] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032150] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:26.918 [2024-12-16 22:16:33.032160] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:26.918 [2024-12-16 22:16:33.032167] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:26.918 [2024-12-16 22:16:33.032176] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:26.918 [2024-12-16 22:16:33.032183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.918 [2024-12-16 22:16:33.032192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:26.918 [2024-12-16 22:16:33.032200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:20:26.918 [2024-12-16 22:16:33.032208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.918 [2024-12-16 22:16:33.032292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.918 [2024-12-16 22:16:33.032303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:26.918 [2024-12-16 22:16:33.032310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:26.918 [2024-12-16 22:16:33.032323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.918 [2024-12-16 22:16:33.032415] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:26.918 [2024-12-16 22:16:33.032431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:26.918 [2024-12-16 22:16:33.032439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:26.918 [2024-12-16 22:16:33.032467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:26.918 [2024-12-16 22:16:33.032492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.918 [2024-12-16 22:16:33.032509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:26.918 [2024-12-16 22:16:33.032518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:26.918 [2024-12-16 22:16:33.032526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.918 [2024-12-16 22:16:33.032537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:26.918 [2024-12-16 22:16:33.032545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:26.918 [2024-12-16 22:16:33.032554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:26.918 [2024-12-16 22:16:33.032571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:26.918 [2024-12-16 22:16:33.032596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:26.918 
[2024-12-16 22:16:33.032622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:26.918 [2024-12-16 22:16:33.032645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:26.918 [2024-12-16 22:16:33.032674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:26.918 [2024-12-16 22:16:33.032698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.918 [2024-12-16 22:16:33.032715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:26.918 [2024-12-16 22:16:33.032724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:26.918 [2024-12-16 22:16:33.032731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.918 [2024-12-16 22:16:33.032741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:26.918 [2024-12-16 22:16:33.032748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:26.918 [2024-12-16 22:16:33.032757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:26.918 [2024-12-16 22:16:33.032773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:26.918 [2024-12-16 22:16:33.032780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032788] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:26.918 [2024-12-16 22:16:33.032797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:26.918 [2024-12-16 22:16:33.032809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.918 [2024-12-16 22:16:33.032826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:26.918 [2024-12-16 22:16:33.032834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:26.918 [2024-12-16 22:16:33.032854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:26.918 [2024-12-16 22:16:33.032862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:26.918 [2024-12-16 22:16:33.032871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:26.918 [2024-12-16 22:16:33.032879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:26.918 [2024-12-16 22:16:33.032890] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:26.918 [2024-12-16 
22:16:33.032902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.918 [2024-12-16 22:16:33.032913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:26.918 [2024-12-16 22:16:33.032921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:26.918 [2024-12-16 22:16:33.032929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:26.918 [2024-12-16 22:16:33.032936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:26.918 [2024-12-16 22:16:33.032945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:26.918 [2024-12-16 22:16:33.032952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:26.918 [2024-12-16 22:16:33.032963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:26.918 [2024-12-16 22:16:33.032970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:26.918 [2024-12-16 22:16:33.032978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:26.918 [2024-12-16 22:16:33.032985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:26.918 [2024-12-16 22:16:33.032994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:26.918 [2024-12-16 22:16:33.033001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:26.918 [2024-12-16 22:16:33.033010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:26.918 [2024-12-16 22:16:33.033017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:26.918 [2024-12-16 22:16:33.033025] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:26.918 [2024-12-16 22:16:33.033033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.918 [2024-12-16 22:16:33.033042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:26.918 [2024-12-16 22:16:33.033049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:26.918 [2024-12-16 22:16:33.033058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:26.918 [2024-12-16 22:16:33.033064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:26.918 [2024-12-16 22:16:33.033073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.918 [2024-12-16 22:16:33.033080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:26.918 [2024-12-16 22:16:33.033091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:20:26.918 [2024-12-16 22:16:33.033098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.918 [2024-12-16 22:16:33.033146] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:26.918 [2024-12-16 22:16:33.033156] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:31.134 [2024-12-16 22:16:36.768550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.134 [2024-12-16 22:16:36.768608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:31.134 [2024-12-16 22:16:36.768625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3735.386 ms 00:20:31.134 [2024-12-16 22:16:36.768633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.134 [2024-12-16 22:16:36.777389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.134 [2024-12-16 22:16:36.777428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.134 [2024-12-16 22:16:36.777441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.671 ms 00:20:31.134 [2024-12-16 22:16:36.777449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.134 [2024-12-16 22:16:36.777547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.134 [2024-12-16 22:16:36.777562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.134 [2024-12-16 22:16:36.777576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:31.134 [2024-12-16 22:16:36.777584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.134 [2024-12-16 22:16:36.786498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.134 [2024-12-16 22:16:36.786533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.134 [2024-12-16 22:16:36.786545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.872 ms 00:20:31.134 [2024-12-16 22:16:36.786555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.134 [2024-12-16 22:16:36.786585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.134 [2024-12-16 22:16:36.786593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.134 [2024-12-16 22:16:36.786603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.134 [2024-12-16 22:16:36.786610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.134 [2024-12-16 22:16:36.786981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.134 [2024-12-16 22:16:36.787004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.134 [2024-12-16 22:16:36.787015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:31.134 [2024-12-16 22:16:36.787022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.134 
[2024-12-16 22:16:36.787128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.134 [2024-12-16 22:16:36.787138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.135 [2024-12-16 22:16:36.787149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:31.135 [2024-12-16 22:16:36.787158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.792730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.792762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.135 [2024-12-16 22:16:36.792773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.550 ms 00:20:31.135 [2024-12-16 22:16:36.792781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.811225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:31.135 [2024-12-16 22:16:36.814368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.814409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.135 [2024-12-16 22:16:36.814424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.508 ms 00:20:31.135 [2024-12-16 22:16:36.814436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.884410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.884467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:31.135 [2024-12-16 22:16:36.884482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.934 ms 00:20:31.135 [2024-12-16 22:16:36.884495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.884681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.884694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.135 [2024-12-16 22:16:36.884703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:20:31.135 [2024-12-16 22:16:36.884712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.889319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.889363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:31.135 [2024-12-16 22:16:36.889377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.572 ms 00:20:31.135 [2024-12-16 22:16:36.889387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.893155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.893196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:31.135 [2024-12-16 22:16:36.893206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:20:31.135 [2024-12-16 22:16:36.893214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.893571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.893583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.135 
[2024-12-16 22:16:36.893592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:31.135 [2024-12-16 22:16:36.893604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.925980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.926038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:31.135 [2024-12-16 22:16:36.926052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.357 ms 00:20:31.135 [2024-12-16 22:16:36.926062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.931532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.931578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:31.135 [2024-12-16 22:16:36.931588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.422 ms 00:20:31.135 [2024-12-16 22:16:36.931598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.936120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.936163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:31.135 [2024-12-16 22:16:36.936173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.484 ms 00:20:31.135 [2024-12-16 22:16:36.936182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.941174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.941221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.135 [2024-12-16 22:16:36.941231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.954 ms 00:20:31.135 [2024-12-16 22:16:36.941243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.941284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.941302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.135 [2024-12-16 22:16:36.941311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.135 [2024-12-16 22:16:36.941325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.941401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:36.941414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:31.135 [2024-12-16 22:16:36.941422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:31.135 [2024-12-16 22:16:36.941438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:36.942472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3918.914 ms, result 0 00:20:31.135 { 00:20:31.135 "name": "ftl0", 00:20:31.135 "uuid": "e1bd74c0-8120-4477-a5a8-3d7d2ecaf716" 00:20:31.135 } 00:20:31.135 22:16:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:31.135 22:16:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:31.135 22:16:37 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:31.135 22:16:37 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:31.135 [2024-12-16 22:16:37.390008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.390063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.135 [2024-12-16 22:16:37.390083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.135 [2024-12-16 22:16:37.390092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.390119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.135 [2024-12-16 22:16:37.390876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.390924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.135 [2024-12-16 22:16:37.390937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:20:31.135 [2024-12-16 22:16:37.390948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.391217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.391232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.135 [2024-12-16 22:16:37.391246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:20:31.135 [2024-12-16 22:16:37.391257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.394516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.394544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.135 [2024-12-16 22:16:37.394555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:20:31.135 [2024-12-16 22:16:37.394565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.400690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.400738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:31.135 [2024-12-16 22:16:37.400749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:20:31.135 [2024-12-16 22:16:37.400763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.403780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.403858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.135 [2024-12-16 22:16:37.403869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:20:31.135 [2024-12-16 22:16:37.403879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.410903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.410960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.135 [2024-12-16 22:16:37.410971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:20:31.135 [2024-12-16 22:16:37.410981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.411140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.411155] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.135 [2024-12-16 22:16:37.411167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:31.135 [2024-12-16 22:16:37.411177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.414196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.414251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.135 [2024-12-16 22:16:37.414261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:20:31.135 [2024-12-16 22:16:37.414271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.417417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.417476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.135 [2024-12-16 22:16:37.417486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:20:31.135 [2024-12-16 22:16:37.417497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.419705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.419764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.135 [2024-12-16 22:16:37.419774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:20:31.135 [2024-12-16 22:16:37.419784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.422342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.135 [2024-12-16 22:16:37.422400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.135 [2024-12-16 22:16:37.422410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.473 ms 00:20:31.135 [2024-12-16 22:16:37.422423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.135 [2024-12-16 22:16:37.422467] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.135 [2024-12-16 22:16:37.422486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422582] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.135 [2024-12-16 22:16:37.422722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 
[2024-12-16 22:16:37.422808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.422999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.423008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.423015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.423025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.423033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.136 [2024-12-16 22:16:37.423050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free
00:20:31.136 [2024-12-16 22:16:37.423058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60-100: 0 / 261120 wr_cnt: 0 state: free (all 41 entries identical)
00:20:31.136 [2024-12-16 22:16:37.423460] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:31.136 [2024-12-16 22:16:37.423469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1bd74c0-8120-4477-a5a8-3d7d2ecaf716
00:20:31.136 [2024-12-16 22:16:37.423479] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:31.136 [2024-12-16 22:16:37.423486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:31.136 [2024-12-16 22:16:37.423495] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:31.136 [2024-12-16 22:16:37.423503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:31.136 [2024-12-16 22:16:37.423512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:31.136 [2024-12-16 22:16:37.423523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:20:31.136 [2024-12-16 22:16:37.423533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:20:31.136 [2024-12-16 22:16:37.423539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:20:31.136 [2024-12-16 22:16:37.423547] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:20:31.136 [2024-12-16 22:16:37.423554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.136 [2024-12-16 22:16:37.423564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.136 [2024-12-16 22:16:37.423573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:20:31.136 [2024-12-16 22:16:37.423582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.136 [2024-12-16 22:16:37.425915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.136 [2024-12-16 22:16:37.425951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.136 [2024-12-16 22:16:37.425962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.313 ms 00:20:31.136 [2024-12-16 22:16:37.425977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.136 [2024-12-16 22:16:37.426097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.136 [2024-12-16 22:16:37.426110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.137 [2024-12-16 22:16:37.426119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:31.137 [2024-12-16 22:16:37.426131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.434314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.434368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.137 [2024-12-16 22:16:37.434383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.434398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.434464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.434476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.137 [2024-12-16 22:16:37.434484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.434495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.434572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.434589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.137 [2024-12-16 22:16:37.434598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.434611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.434628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.434639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.137 [2024-12-16 22:16:37.434647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.434658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.448402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.448460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.137 [2024-12-16 22:16:37.448475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 
[2024-12-16 22:16:37.448485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.458886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.458936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.137 [2024-12-16 22:16:37.458948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.458958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.459028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.459043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.137 [2024-12-16 22:16:37.459052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.459062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.459111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.459123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.137 [2024-12-16 22:16:37.459132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.459144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.459215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.459227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.137 [2024-12-16 22:16:37.459241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.459251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.459282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.459295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.137 [2024-12-16 22:16:37.459303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.459313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.459351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.459365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.137 [2024-12-16 22:16:37.459373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.459383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.459429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.137 [2024-12-16 22:16:37.459442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.137 [2024-12-16 22:16:37.459451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.137 [2024-12-16 22:16:37.459468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.137 [2024-12-16 22:16:37.459604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.562 ms, result 0 00:20:31.137 true 00:20:31.398 22:16:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90161 00:20:31.398 
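The statistics dump above reports WAF as "inf", which follows from the two counters printed beside it: 960 total writes against 0 user writes. A minimal sketch of that quotient, assuming WAF here is the usual ratio of total device writes to user writes (the only two fields the dump actually shows); waf() is a hypothetical helper, not an SPDK API:

    # Reproduces the WAF figure from the ftl_dev_dump_stats counters above
    # (total writes: 960, user writes: 0).
    def waf(total_writes: int, user_writes: int) -> float:
        if user_writes == 0:
            return float("inf")  # no user data written yet; matches the log's "WAF: inf"
        return total_writes / user_writes

    print(waf(960, 0))  # inf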
22:16:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90161 ']' 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90161 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90161 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.398 killing process with pid 90161 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90161' 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 90161 00:20:31.398 22:16:37 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 90161 00:20:35.606 22:16:41 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:39.886 262144+0 records in 00:20:39.886 262144+0 records out 00:20:39.886 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.87935 s, 277 MB/s 00:20:39.886 22:16:45 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:41.800 22:16:47 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:41.800 [2024-12-16 22:16:48.022399] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:20:41.800 [2024-12-16 22:16:48.022674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90375 ] 00:20:42.061 [2024-12-16 22:16:48.181659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.061 [2024-12-16 22:16:48.206299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.061 [2024-12-16 22:16:48.317635] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.061 [2024-12-16 22:16:48.317723] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:42.324 [2024-12-16 22:16:48.478955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.479016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.324 [2024-12-16 22:16:48.479031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.324 [2024-12-16 22:16:48.479040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.479101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.479112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.324 [2024-12-16 22:16:48.479121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:42.324 [2024-12-16 22:16:48.479129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.479155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:42.324 [2024-12-16 22:16:48.479558] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.324 [2024-12-16 22:16:48.479592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.479604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.324 [2024-12-16 22:16:48.479621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:20:42.324 [2024-12-16 22:16:48.479629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.481474] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:42.324 [2024-12-16 22:16:48.485021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.485075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:42.324 [2024-12-16 22:16:48.485092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.550 ms 00:20:42.324 [2024-12-16 22:16:48.485104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.485297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.485332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:42.324 [2024-12-16 22:16:48.485342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:42.324 [2024-12-16 22:16:48.485351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.493712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.493760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.324 [2024-12-16 22:16:48.493776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.317 ms 00:20:42.324 [2024-12-16 22:16:48.493783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.493910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.493921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.324 [2024-12-16 22:16:48.493931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:20:42.324 [2024-12-16 22:16:48.493940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.494006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.494016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.324 [2024-12-16 22:16:48.494024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:42.324 [2024-12-16 22:16:48.494035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.494058] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.324 [2024-12-16 22:16:48.496134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.496174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.324 [2024-12-16 22:16:48.496184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:20:42.324 [2024-12-16 22:16:48.496191] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.496231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.324 [2024-12-16 22:16:48.496241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.324 [2024-12-16 22:16:48.496253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:42.324 [2024-12-16 22:16:48.496264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.324 [2024-12-16 22:16:48.496289] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:42.324 [2024-12-16 22:16:48.496311] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:42.324 [2024-12-16 22:16:48.496350] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:42.324 [2024-12-16 22:16:48.496377] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:42.324 [2024-12-16 22:16:48.496487] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.325 [2024-12-16 22:16:48.496497] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.325 [2024-12-16 22:16:48.496511] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:42.325 [2024-12-16 22:16:48.496525] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.325 [2024-12-16 22:16:48.496538] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.325 [2024-12-16 22:16:48.496546] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:42.325 [2024-12-16 22:16:48.496554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.325 [2024-12-16 22:16:48.496563] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.325 [2024-12-16 22:16:48.496571] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.325 [2024-12-16 22:16:48.496579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.325 [2024-12-16 22:16:48.496586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.325 [2024-12-16 22:16:48.496594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:42.325 [2024-12-16 22:16:48.496603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.325 [2024-12-16 22:16:48.496691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.325 [2024-12-16 22:16:48.496701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.325 [2024-12-16 22:16:48.496712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:42.325 [2024-12-16 22:16:48.496719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.325 [2024-12-16 22:16:48.496820] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.325 [2024-12-16 22:16:48.496832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.325 [2024-12-16 22:16:48.496859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.325 
[2024-12-16 22:16:48.496874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.325 [2024-12-16 22:16:48.496883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:42.325 [2024-12-16 22:16:48.496892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.325 [2024-12-16 22:16:48.496900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:42.325 [2024-12-16 22:16:48.496908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.325 [2024-12-16 22:16:48.496916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:42.325 [2024-12-16 22:16:48.496923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.325 [2024-12-16 22:16:48.496931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.325 [2024-12-16 22:16:48.496941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:42.325 [2024-12-16 22:16:48.496949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.325 [2024-12-16 22:16:48.496957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.325 [2024-12-16 22:16:48.496967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:42.325 [2024-12-16 22:16:48.496976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.325 [2024-12-16 22:16:48.496983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.325 [2024-12-16 22:16:48.496993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:42.325 [2024-12-16 22:16:48.497001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.325 [2024-12-16 22:16:48.497018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.325 [2024-12-16 22:16:48.497034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.325 [2024-12-16 22:16:48.497041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.325 [2024-12-16 22:16:48.497057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.325 [2024-12-16 22:16:48.497065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.325 [2024-12-16 22:16:48.497084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.325 [2024-12-16 22:16:48.497091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.325 [2024-12-16 22:16:48.497107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.325 [2024-12-16 22:16:48.497114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.325 [2024-12-16 22:16:48.497129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:42.325 [2024-12-16 22:16:48.497136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:42.325 [2024-12-16 22:16:48.497143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.325 [2024-12-16 22:16:48.497151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.325 [2024-12-16 22:16:48.497159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:42.325 [2024-12-16 22:16:48.497167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.325 [2024-12-16 22:16:48.497181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:42.325 [2024-12-16 22:16:48.497188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497198] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.325 [2024-12-16 22:16:48.497210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.325 [2024-12-16 22:16:48.497217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.325 [2024-12-16 22:16:48.497227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.325 [2024-12-16 22:16:48.497235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.325 [2024-12-16 22:16:48.497241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.325 [2024-12-16 22:16:48.497248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.325 [2024-12-16 22:16:48.497256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.325 [2024-12-16 22:16:48.497262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.325 [2024-12-16 22:16:48.497268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.325 [2024-12-16 22:16:48.497276] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.325 [2024-12-16 22:16:48.497285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.325 [2024-12-16 22:16:48.497293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:42.325 [2024-12-16 22:16:48.497301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:42.325 [2024-12-16 22:16:48.497308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:42.325 [2024-12-16 22:16:48.497316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:42.325 [2024-12-16 22:16:48.497325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:42.325 [2024-12-16 22:16:48.497331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:42.325 [2024-12-16 22:16:48.497338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:42.325 [2024-12-16 22:16:48.497344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:42.325 [2024-12-16 22:16:48.497351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:42.325 [2024-12-16 22:16:48.497359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:42.325 [2024-12-16 22:16:48.497366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:42.325 [2024-12-16 22:16:48.497373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:42.325 [2024-12-16 22:16:48.497379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:42.325 [2024-12-16 22:16:48.497386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:42.325 [2024-12-16 22:16:48.497393] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.325 [2024-12-16 22:16:48.497405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.325 [2024-12-16 22:16:48.497413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.325 [2024-12-16 22:16:48.497421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.325 [2024-12-16 22:16:48.497429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.325 [2024-12-16 22:16:48.497437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.325 [2024-12-16 22:16:48.497447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.325 [2024-12-16 22:16:48.497458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.325 [2024-12-16 22:16:48.497469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:20:42.325 [2024-12-16 22:16:48.497480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.325 [2024-12-16 22:16:48.511382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.325 [2024-12-16 22:16:48.511434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.325 [2024-12-16 22:16:48.511446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.852 ms 00:20:42.325 [2024-12-16 22:16:48.511454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.325 [2024-12-16 22:16:48.511542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.325 [2024-12-16 22:16:48.511557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:42.325 [2024-12-16 22:16:48.511565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 
00:20:42.326 [2024-12-16 22:16:48.511574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.530931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.530983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.326 [2024-12-16 22:16:48.530996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.296 ms 00:20:42.326 [2024-12-16 22:16:48.531013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.531061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.531071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.326 [2024-12-16 22:16:48.531080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.326 [2024-12-16 22:16:48.531089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.531655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.531693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.326 [2024-12-16 22:16:48.531706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:20:42.326 [2024-12-16 22:16:48.531715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.531884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.531896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.326 [2024-12-16 22:16:48.531906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:42.326 [2024-12-16 22:16:48.531915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.539381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.539427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.326 [2024-12-16 22:16:48.539438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.439 ms 00:20:42.326 [2024-12-16 22:16:48.539446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.543281] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:42.326 [2024-12-16 22:16:48.543336] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.326 [2024-12-16 22:16:48.543348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.543357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.326 [2024-12-16 22:16:48.543365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.811 ms 00:20:42.326 [2024-12-16 22:16:48.543372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.558893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.558944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.326 [2024-12-16 22:16:48.558956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.459 ms 00:20:42.326 [2024-12-16 22:16:48.558964] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.561903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.561945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.326 [2024-12-16 22:16:48.561956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.883 ms 00:20:42.326 [2024-12-16 22:16:48.561964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.564654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.564703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.326 [2024-12-16 22:16:48.564714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:20:42.326 [2024-12-16 22:16:48.564721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.565098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.565113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.326 [2024-12-16 22:16:48.565123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:20:42.326 [2024-12-16 22:16:48.565130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.589044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.589111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:42.326 [2024-12-16 22:16:48.589124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.895 ms 00:20:42.326 [2024-12-16 22:16:48.589133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.597286] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:42.326 [2024-12-16 22:16:48.600267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.600315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.326 [2024-12-16 22:16:48.600327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.083 ms 00:20:42.326 [2024-12-16 22:16:48.600343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.600419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.600430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:42.326 [2024-12-16 22:16:48.600439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:42.326 [2024-12-16 22:16:48.600447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.600515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.600526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.326 [2024-12-16 22:16:48.600538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:42.326 [2024-12-16 22:16:48.600546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.600566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.600574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:42.326 [2024-12-16 22:16:48.600582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:42.326 [2024-12-16 22:16:48.600590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.600626] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:42.326 [2024-12-16 22:16:48.600641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.600649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:42.326 [2024-12-16 22:16:48.600656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:42.326 [2024-12-16 22:16:48.600667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.605797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.605893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.326 [2024-12-16 22:16:48.605905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.113 ms 00:20:42.326 [2024-12-16 22:16:48.605913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.605989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.326 [2024-12-16 22:16:48.605999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.326 [2024-12-16 22:16:48.606008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:42.326 [2024-12-16 22:16:48.606022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.326 [2024-12-16 22:16:48.608401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.943 ms, result 0 00:20:43.271  [2024-12-16T22:16:51.025Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-16T22:16:51.967Z] Copying: 55/1024 [MB] (45 MBps) [2024-12-16T22:16:52.910Z] Copying: 70/1024 [MB] (15 MBps) [2024-12-16T22:16:53.852Z] Copying: 95/1024 [MB] (25 MBps) [2024-12-16T22:16:54.796Z] Copying: 118/1024 [MB] (22 MBps) [2024-12-16T22:16:55.738Z] Copying: 143/1024 [MB] (25 MBps) [2024-12-16T22:16:56.681Z] Copying: 160/1024 [MB] (17 MBps) [2024-12-16T22:16:57.626Z] Copying: 175/1024 [MB] (14 MBps) [2024-12-16T22:16:59.013Z] Copying: 193/1024 [MB] (18 MBps) [2024-12-16T22:16:59.957Z] Copying: 204/1024 [MB] (11 MBps) [2024-12-16T22:17:00.902Z] Copying: 214/1024 [MB] (10 MBps) [2024-12-16T22:17:01.845Z] Copying: 225/1024 [MB] (10 MBps) [2024-12-16T22:17:02.791Z] Copying: 241/1024 [MB] (16 MBps) [2024-12-16T22:17:03.735Z] Copying: 260/1024 [MB] (18 MBps) [2024-12-16T22:17:04.679Z] Copying: 272/1024 [MB] (11 MBps) [2024-12-16T22:17:05.622Z] Copying: 287/1024 [MB] (15 MBps) [2024-12-16T22:17:07.009Z] Copying: 306/1024 [MB] (19 MBps) [2024-12-16T22:17:07.952Z] Copying: 319/1024 [MB] (12 MBps) [2024-12-16T22:17:08.896Z] Copying: 330/1024 [MB] (10 MBps) [2024-12-16T22:17:09.838Z] Copying: 351/1024 [MB] (21 MBps) [2024-12-16T22:17:10.783Z] Copying: 365/1024 [MB] (14 MBps) [2024-12-16T22:17:11.738Z] Copying: 377/1024 [MB] (11 MBps) [2024-12-16T22:17:12.686Z] Copying: 388/1024 [MB] (10 MBps) [2024-12-16T22:17:13.630Z] Copying: 407/1024 [MB] (19 MBps) [2024-12-16T22:17:15.014Z] Copying: 417/1024 [MB] (10 MBps) [2024-12-16T22:17:15.958Z] Copying: 459/1024 [MB] (41 MBps) [2024-12-16T22:17:16.901Z] Copying: 511/1024 [MB] (52 MBps) 
[2024-12-16T22:17:17.843Z] Copying: 556/1024 [MB] (45 MBps) [2024-12-16T22:17:18.785Z] Copying: 608/1024 [MB] (52 MBps) [2024-12-16T22:17:19.728Z] Copying: 660/1024 [MB] (52 MBps) [2024-12-16T22:17:20.673Z] Copying: 683/1024 [MB] (22 MBps) [2024-12-16T22:17:21.618Z] Copying: 697/1024 [MB] (14 MBps) [2024-12-16T22:17:23.006Z] Copying: 714/1024 [MB] (17 MBps) [2024-12-16T22:17:23.949Z] Copying: 735/1024 [MB] (20 MBps) [2024-12-16T22:17:24.894Z] Copying: 752/1024 [MB] (17 MBps) [2024-12-16T22:17:25.842Z] Copying: 770/1024 [MB] (18 MBps) [2024-12-16T22:17:26.788Z] Copying: 787/1024 [MB] (16 MBps) [2024-12-16T22:17:27.735Z] Copying: 802/1024 [MB] (14 MBps) [2024-12-16T22:17:28.680Z] Copying: 819/1024 [MB] (17 MBps) [2024-12-16T22:17:29.624Z] Copying: 835/1024 [MB] (15 MBps) [2024-12-16T22:17:31.013Z] Copying: 848/1024 [MB] (12 MBps) [2024-12-16T22:17:31.957Z] Copying: 858/1024 [MB] (10 MBps) [2024-12-16T22:17:32.903Z] Copying: 868/1024 [MB] (10 MBps) [2024-12-16T22:17:33.857Z] Copying: 879/1024 [MB] (10 MBps) [2024-12-16T22:17:34.802Z] Copying: 890/1024 [MB] (11 MBps) [2024-12-16T22:17:35.745Z] Copying: 924/1024 [MB] (34 MBps) [2024-12-16T22:17:36.691Z] Copying: 946/1024 [MB] (21 MBps) [2024-12-16T22:17:37.703Z] Copying: 957/1024 [MB] (10 MBps) [2024-12-16T22:17:38.650Z] Copying: 977/1024 [MB] (20 MBps) [2024-12-16T22:17:40.042Z] Copying: 997/1024 [MB] (19 MBps) [2024-12-16T22:17:40.618Z] Copying: 1012/1024 [MB] (15 MBps) [2024-12-16T22:17:40.618Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-16 22:17:40.520820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.520897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:34.271 [2024-12-16 22:17:40.520913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.271 [2024-12-16 22:17:40.520931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.520953] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:34.271 [2024-12-16 22:17:40.521644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.521669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:34.271 [2024-12-16 22:17:40.521689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:21:34.271 [2024-12-16 22:17:40.521698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.523981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.524030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:34.271 [2024-12-16 22:17:40.524040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:21:34.271 [2024-12-16 22:17:40.524048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.541250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.541303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:34.271 [2024-12-16 22:17:40.541315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.179 ms 00:21:34.271 [2024-12-16 22:17:40.541324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.547462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:34.271 [2024-12-16 22:17:40.547506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:34.271 [2024-12-16 22:17:40.547528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.098 ms 00:21:34.271 [2024-12-16 22:17:40.547535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.550279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.550327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:34.271 [2024-12-16 22:17:40.550337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:21:34.271 [2024-12-16 22:17:40.550344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.554827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.554902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:34.271 [2024-12-16 22:17:40.554912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.441 ms 00:21:34.271 [2024-12-16 22:17:40.554920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.555046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.555058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:34.271 [2024-12-16 22:17:40.555068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:34.271 [2024-12-16 22:17:40.555089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.558275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.558326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:34.271 [2024-12-16 22:17:40.558337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:21:34.271 [2024-12-16 22:17:40.558346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.561301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.561348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:34.271 [2024-12-16 22:17:40.561357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:21:34.271 [2024-12-16 22:17:40.561365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.563676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.563724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:34.271 [2024-12-16 22:17:40.563733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms 00:21:34.271 [2024-12-16 22:17:40.563740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 22:17:40.566144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.271 [2024-12-16 22:17:40.566194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:34.271 [2024-12-16 22:17:40.566205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.339 ms 00:21:34.271 [2024-12-16 22:17:40.566212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.271 [2024-12-16 
22:17:40.566248] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:34.271 [2024-12-16 22:17:40.566263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-98: 0 / 261120 wr_cnt: 0 state: free (all 98 entries identical)
00:21:34.272 [2024-12-16 22:17:40.567047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 99: 0 / 261120 wr_cnt: 0 state: free
00:21:34.272 [2024-12-16 22:17:40.567055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:21:34.272 [2024-12-16 22:17:40.567070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:34.272 [2024-12-16 22:17:40.567078] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1bd74c0-8120-4477-a5a8-3d7d2ecaf716
00:21:34.272 [2024-12-16 22:17:40.567087] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:34.272 [2024-12-16 22:17:40.567094] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:34.272 [2024-12-16 22:17:40.567102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:34.272 [2024-12-16 22:17:40.567111] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:34.272 [2024-12-16 22:17:40.567117] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:34.272 [2024-12-16 22:17:40.567127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:34.273 [2024-12-16 22:17:40.567135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:34.273 [2024-12-16 22:17:40.567141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:34.273 [2024-12-16 22:17:40.567156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:34.273 [2024-12-16 22:17:40.567164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:34.273 [2024-12-16 22:17:40.567179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:34.273 [2024-12-16 22:17:40.567188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms
00:21:34.273 [2024-12-16 22:17:40.567199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:34.273 [2024-12-16 22:17:40.569428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:34.273 [2024-12-16 22:17:40.569467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:34.273 [2024-12-16 22:17:40.569479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms
00:21:34.273 [2024-12-16 22:17:40.569489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:34.273 [2024-12-16 22:17:40.569616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:34.273 [2024-12-16 22:17:40.569627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:34.273 [2024-12-16 22:17:40.569637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms
00:21:34.273 [2024-12-16 22:17:40.569646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:34.273 [2024-12-16 22:17:40.577023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:34.273 [2024-12-16 22:17:40.577072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:34.273 [2024-12-16 22:17:40.577083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:34.273 [2024-12-16 22:17:40.577090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:34.273 [2024-12-16 22:17:40.577154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:34.273 [2024-12-16 22:17:40.577162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:34.273 [2024-12-16 22:17:40.577171] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.577179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.577251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.577264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.273 [2024-12-16 22:17:40.577273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.577286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.577301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.577312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.273 [2024-12-16 22:17:40.577323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.577332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.590483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.590538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.273 [2024-12-16 22:17:40.590549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.590557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.600464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.600522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.273 [2024-12-16 22:17:40.600534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.600541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.600594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.600604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.273 [2024-12-16 22:17:40.600613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.600621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.600657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.600674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.273 [2024-12-16 22:17:40.600690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.600698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.600770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.600780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.273 [2024-12-16 22:17:40.600788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.600801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.600832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.600908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:21:34.273 [2024-12-16 22:17:40.600918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.600931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.600969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.600980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.273 [2024-12-16 22:17:40.600988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.600996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.601039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.273 [2024-12-16 22:17:40.601055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.273 [2024-12-16 22:17:40.601067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.273 [2024-12-16 22:17:40.601080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.273 [2024-12-16 22:17:40.601215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 80.364 ms, result 0 00:21:34.845 00:21:34.845 00:21:34.845 22:17:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:34.845 [2024-12-16 22:17:41.103276] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:21:34.845 [2024-12-16 22:17:41.103447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90918 ] 00:21:35.106 [2024-12-16 22:17:41.264266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.106 [2024-12-16 22:17:41.294036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.106 [2024-12-16 22:17:41.410419] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.106 [2024-12-16 22:17:41.410512] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.369 [2024-12-16 22:17:41.572230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.369 [2024-12-16 22:17:41.572295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.369 [2024-12-16 22:17:41.572310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.369 [2024-12-16 22:17:41.572319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.369 [2024-12-16 22:17:41.572377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.369 [2024-12-16 22:17:41.572389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.369 [2024-12-16 22:17:41.572399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:35.370 [2024-12-16 22:17:41.572411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.572434] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.370 
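Each management step above is emitted as a fixed quadruple of trace_step records: the step kind (Action or Rollback), its name, its duration, and its status. A sketch that folds the quadruples back into tuples (Python; the pattern is inferred from these records and the function name is hypothetical, not an SPDK interface):

import re

# name ... duration ... status, with log noise (wall clocks, timestamps,
# source locations) interleaved between the three fields
STEP_RE = re.compile(
    r"name:\s+(.*?)\s+\d{2}:\d{2}:\d{2}\.\d+"  # step name ends at the next wall clock
    r".*?duration:\s+([\d.]+)\s+ms"            # duration in milliseconds
    r".*?status:\s+(\d+)",                     # exit status of the step
    re.DOTALL,
)

def iter_steps(log_text):
    for name, dur, status in STEP_RE.findall(log_text):
        yield name, float(dur), int(status)

sample = ("name: Dump statistics 00:21:34.273 [2024-12-16 22:17:40.567188] "
          "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms "
          "00:21:34.273 [2024-12-16 22:17:40.567199] mngt/ftl_mngt.c: 431:trace_step: "
          "*NOTICE*: [FTL][ftl0] status: 0")
print(list(iter_steps(sample)))  # [('Dump statistics', 0.917, 0)]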
[2024-12-16 22:17:41.572830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.370 [2024-12-16 22:17:41.572890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.572902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.370 [2024-12-16 22:17:41.572916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:21:35.370 [2024-12-16 22:17:41.572924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.574625] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.370 [2024-12-16 22:17:41.578189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.578243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.370 [2024-12-16 22:17:41.578260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:21:35.370 [2024-12-16 22:17:41.578272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.578346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.578359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.370 [2024-12-16 22:17:41.578368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:35.370 [2024-12-16 22:17:41.578377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.586338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.586385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.370 [2024-12-16 22:17:41.586400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.919 ms 00:21:35.370 [2024-12-16 22:17:41.586412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.586511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.586521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.370 [2024-12-16 22:17:41.586530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:35.370 [2024-12-16 22:17:41.586539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.586598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.586610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.370 [2024-12-16 22:17:41.586618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:35.370 [2024-12-16 22:17:41.586629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.586660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.370 [2024-12-16 22:17:41.588695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.588732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.370 [2024-12-16 22:17:41.588743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:21:35.370 [2024-12-16 22:17:41.588750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:35.370 [2024-12-16 22:17:41.588790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.588798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.370 [2024-12-16 22:17:41.588807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:35.370 [2024-12-16 22:17:41.588823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.588865] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.370 [2024-12-16 22:17:41.588888] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:35.370 [2024-12-16 22:17:41.588930] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.370 [2024-12-16 22:17:41.588949] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:35.370 [2024-12-16 22:17:41.589055] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.370 [2024-12-16 22:17:41.589066] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.370 [2024-12-16 22:17:41.589081] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:35.370 [2024-12-16 22:17:41.589092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589102] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589110] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.370 [2024-12-16 22:17:41.589118] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.370 [2024-12-16 22:17:41.589131] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.370 [2024-12-16 22:17:41.589140] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.370 [2024-12-16 22:17:41.589147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.589155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.370 [2024-12-16 22:17:41.589163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:35.370 [2024-12-16 22:17:41.589174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.589263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-12-16 22:17:41.589272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.370 [2024-12-16 22:17:41.589279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:35.370 [2024-12-16 22:17:41.589287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-12-16 22:17:41.589382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.370 [2024-12-16 22:17:41.589393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.370 [2024-12-16 22:17:41.589403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589418] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.370 [2024-12-16 22:17:41.589435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.370 [2024-12-16 22:17:41.589458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.370 [2024-12-16 22:17:41.589476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.370 [2024-12-16 22:17:41.589486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.370 [2024-12-16 22:17:41.589493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.370 [2024-12-16 22:17:41.589502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.370 [2024-12-16 22:17:41.589510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:35.370 [2024-12-16 22:17:41.589517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.370 [2024-12-16 22:17:41.589535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.370 [2024-12-16 22:17:41.589560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.370 [2024-12-16 22:17:41.589584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.370 [2024-12-16 22:17:41.589608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.370 [2024-12-16 22:17:41.589639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.370 [2024-12-16 22:17:41.589654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.370 [2024-12-16 22:17:41.589661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:35.370 [2024-12-16 22:17:41.589669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.370 [2024-12-16 22:17:41.589677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.370 
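The dump_region records in this layout dump give each region a name, an offset, and a size, all in MiB. A quick consistency sketch (Python; the parsing and names are ours, based only on the log text) checking that the listed regions never overlap:

import re

# (name, offset MiB, blocks MiB) triples; log noise may sit between fields
REGION_RE = re.compile(
    r"Region\s+(\w+).*?offset:\s+([\d.]+)\s+MiB.*?blocks:\s+([\d.]+)\s+MiB",
    re.DOTALL,
)

def check_layout(log_text):
    regions = [(n, float(o), float(b)) for n, o, b in REGION_RE.findall(log_text)]
    for n1, o1, b1 in regions:
        for n2, o2, b2 in regions:
            # half-open intervals [o, o+b) overlap iff each starts before the other ends
            if n1 < n2 and o1 < o2 + b2 and o2 < o1 + b1:
                print(f"overlap: {n1} [{o1}, {o1 + b1}) vs {n2} [{o2}, {o2 + b2})")

check_layout("[FTL][ftl0] Region sb [FTL][ftl0] offset: 0.00 MiB "
             "[FTL][ftl0] blocks: 0.12 MiB [FTL][ftl0] Region l2p "
             "[FTL][ftl0] offset: 0.12 MiB [FTL][ftl0] blocks: 80.00 MiB")
# prints nothing: sb [0.0, 0.12) and l2p [0.12, 80.12) only touch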
[2024-12-16 22:17:41.589685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:35.371 [2024-12-16 22:17:41.589692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.371 [2024-12-16 22:17:41.589700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.371 [2024-12-16 22:17:41.589707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:35.371 [2024-12-16 22:17:41.589716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-12-16 22:17:41.589723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.371 [2024-12-16 22:17:41.589731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:35.371 [2024-12-16 22:17:41.589740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-12-16 22:17:41.589750] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.371 [2024-12-16 22:17:41.589762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.371 [2024-12-16 22:17:41.589771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.371 [2024-12-16 22:17:41.589779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-12-16 22:17:41.589788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.371 [2024-12-16 22:17:41.589797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.371 [2024-12-16 22:17:41.589807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.371 [2024-12-16 22:17:41.589816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.371 [2024-12-16 22:17:41.589824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.371 [2024-12-16 22:17:41.589832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.371 [2024-12-16 22:17:41.589858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.371 [2024-12-16 22:17:41.589868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.371 [2024-12-16 22:17:41.589877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.371 [2024-12-16 22:17:41.589886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:35.371 [2024-12-16 22:17:41.589895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:35.371 [2024-12-16 22:17:41.589903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:35.371 [2024-12-16 22:17:41.589913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:35.371 [2024-12-16 22:17:41.589935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:35.371 [2024-12-16 22:17:41.589942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:21:35.371 [2024-12-16 22:17:41.589950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:35.371 [2024-12-16 22:17:41.589957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:35.371 [2024-12-16 22:17:41.589965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:35.371 [2024-12-16 22:17:41.589973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:35.371 [2024-12-16 22:17:41.589981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:35.371 [2024-12-16 22:17:41.589988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:35.371 [2024-12-16 22:17:41.589995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:35.371 [2024-12-16 22:17:41.590003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.371 [2024-12-16 22:17:41.590011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.371 [2024-12-16 22:17:41.590020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.371 [2024-12-16 22:17:41.590027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.371 [2024-12-16 22:17:41.590035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.371 [2024-12-16 22:17:41.590043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.371 [2024-12-16 22:17:41.590053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.590062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.371 [2024-12-16 22:17:41.590070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:21:35.371 [2024-12-16 22:17:41.590081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.603817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.603899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.371 [2024-12-16 22:17:41.603913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.687 ms 00:21:35.371 [2024-12-16 22:17:41.603921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.604011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.604021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.371 [2024-12-16 22:17:41.604030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:35.371 [2024-12-16 
22:17:41.604037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.630966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.631022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.371 [2024-12-16 22:17:41.631035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.868 ms 00:21:35.371 [2024-12-16 22:17:41.631043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.631093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.631112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.371 [2024-12-16 22:17:41.631121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:35.371 [2024-12-16 22:17:41.631132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.631722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.631765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.371 [2024-12-16 22:17:41.631777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:21:35.371 [2024-12-16 22:17:41.631794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.631972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.631989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.371 [2024-12-16 22:17:41.631999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:21:35.371 [2024-12-16 22:17:41.632008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.639695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.639740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.371 [2024-12-16 22:17:41.639750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.659 ms 00:21:35.371 [2024-12-16 22:17:41.639758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.643558] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:35.371 [2024-12-16 22:17:41.643610] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:35.371 [2024-12-16 22:17:41.643623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.643631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:35.371 [2024-12-16 22:17:41.643640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.745 ms 00:21:35.371 [2024-12-16 22:17:41.643648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.659797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.659858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:35.371 [2024-12-16 22:17:41.659871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.090 ms 00:21:35.371 [2024-12-16 22:17:41.659879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:21:35.371 [2024-12-16 22:17:41.662719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.662765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:35.371 [2024-12-16 22:17:41.662774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:21:35.371 [2024-12-16 22:17:41.662781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.665272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.665315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:35.371 [2024-12-16 22:17:41.665325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:21:35.371 [2024-12-16 22:17:41.665332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-12-16 22:17:41.665676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-12-16 22:17:41.665690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.371 [2024-12-16 22:17:41.665703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:21:35.371 [2024-12-16 22:17:41.665715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.689639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.689698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:35.372 [2024-12-16 22:17:41.689712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.907 ms 00:21:35.372 [2024-12-16 22:17:41.689721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.698060] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:35.372 [2024-12-16 22:17:41.700922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.700961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.372 [2024-12-16 22:17:41.700973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.149 ms 00:21:35.372 [2024-12-16 22:17:41.700989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.701065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.701077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:35.372 [2024-12-16 22:17:41.701087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:35.372 [2024-12-16 22:17:41.701095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.701166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.701177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.372 [2024-12-16 22:17:41.701186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:35.372 [2024-12-16 22:17:41.701194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.701213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.701222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.372 
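The SB metadata layout earlier in this startup lists the same regions as hex block offsets and sizes (blk_offs/blk_sz), while ftl_layout.c reports MiB. The two views reconcile at a 4 KiB FTL block: region type 0x2 has blk_sz 0x5000 = 20480 blocks, which is exactly the 80.00 MiB reported for l2p. A sketch of the conversion (Python; the constant and helper are ours, with the block size inferred from that cross-check rather than read from a header):

# 4096-byte FTL block, inferred from blk_sz 0x5000 == 80.00 MiB (l2p region)
FTL_BLOCK_SIZE = 4096

def blocks_to_mib(blocks):
    """Convert a block count from the SB layout dump to MiB."""
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

print(blocks_to_mib(0x5000))  # 80.0  -> the l2p region
print(blocks_to_mib(0x20))    # 0.125 -> the sb region, shown as 0.12 MiB
print(blocks_to_mib(0x800))   # 8.0   -> each p2l checkpoint region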
[2024-12-16 22:17:41.701230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:35.372 [2024-12-16 22:17:41.701238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.701278] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:35.372 [2024-12-16 22:17:41.701295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.701306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:35.372 [2024-12-16 22:17:41.701314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:35.372 [2024-12-16 22:17:41.701322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.706753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.706799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.372 [2024-12-16 22:17:41.706809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.413 ms 00:21:35.372 [2024-12-16 22:17:41.706817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.706910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-12-16 22:17:41.706921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.372 [2024-12-16 22:17:41.707007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:35.372 [2024-12-16 22:17:41.707018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-12-16 22:17:41.708278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.571 ms, result 0 00:21:36.761  [2024-12-16T22:17:44.053Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-16T22:17:44.998Z] Copying: 25/1024 [MB] (13 MBps) [2024-12-16T22:17:45.942Z] Copying: 36/1024 [MB] (10 MBps) [2024-12-16T22:17:47.331Z] Copying: 47/1024 [MB] (11 MBps) [2024-12-16T22:17:47.903Z] Copying: 66/1024 [MB] (19 MBps) [2024-12-16T22:17:49.293Z] Copying: 89/1024 [MB] (22 MBps) [2024-12-16T22:17:50.235Z] Copying: 112/1024 [MB] (23 MBps) [2024-12-16T22:17:51.178Z] Copying: 128/1024 [MB] (15 MBps) [2024-12-16T22:17:52.121Z] Copying: 138/1024 [MB] (10 MBps) [2024-12-16T22:17:53.064Z] Copying: 151/1024 [MB] (12 MBps) [2024-12-16T22:17:54.007Z] Copying: 165/1024 [MB] (14 MBps) [2024-12-16T22:17:54.948Z] Copying: 176/1024 [MB] (10 MBps) [2024-12-16T22:17:55.889Z] Copying: 186/1024 [MB] (10 MBps) [2024-12-16T22:17:57.275Z] Copying: 200/1024 [MB] (13 MBps) [2024-12-16T22:17:58.217Z] Copying: 211/1024 [MB] (11 MBps) [2024-12-16T22:17:59.159Z] Copying: 221/1024 [MB] (10 MBps) [2024-12-16T22:18:00.103Z] Copying: 232/1024 [MB] (10 MBps) [2024-12-16T22:18:01.048Z] Copying: 252/1024 [MB] (19 MBps) [2024-12-16T22:18:01.992Z] Copying: 265/1024 [MB] (13 MBps) [2024-12-16T22:18:02.939Z] Copying: 291/1024 [MB] (25 MBps) [2024-12-16T22:18:03.924Z] Copying: 302/1024 [MB] (11 MBps) [2024-12-16T22:18:05.310Z] Copying: 320/1024 [MB] (18 MBps) [2024-12-16T22:18:06.252Z] Copying: 349/1024 [MB] (28 MBps) [2024-12-16T22:18:07.195Z] Copying: 370/1024 [MB] (21 MBps) [2024-12-16T22:18:08.140Z] Copying: 394/1024 [MB] (24 MBps) [2024-12-16T22:18:09.084Z] Copying: 417/1024 [MB] (23 MBps) [2024-12-16T22:18:10.029Z] Copying: 438/1024 [MB] (21 MBps) [2024-12-16T22:18:10.974Z] Copying: 459/1024 [MB] 
(20 MBps) [2024-12-16T22:18:11.918Z] Copying: 480/1024 [MB] (21 MBps) [2024-12-16T22:18:13.306Z] Copying: 496/1024 [MB] (15 MBps) [2024-12-16T22:18:14.251Z] Copying: 515/1024 [MB] (19 MBps) [2024-12-16T22:18:15.207Z] Copying: 527/1024 [MB] (12 MBps) [2024-12-16T22:18:16.150Z] Copying: 549/1024 [MB] (22 MBps) [2024-12-16T22:18:17.097Z] Copying: 567/1024 [MB] (17 MBps) [2024-12-16T22:18:18.040Z] Copying: 592/1024 [MB] (24 MBps) [2024-12-16T22:18:18.984Z] Copying: 611/1024 [MB] (18 MBps) [2024-12-16T22:18:19.929Z] Copying: 625/1024 [MB] (14 MBps) [2024-12-16T22:18:21.317Z] Copying: 645/1024 [MB] (19 MBps) [2024-12-16T22:18:21.890Z] Copying: 659/1024 [MB] (14 MBps) [2024-12-16T22:18:23.279Z] Copying: 678/1024 [MB] (18 MBps) [2024-12-16T22:18:24.224Z] Copying: 704/1024 [MB] (26 MBps) [2024-12-16T22:18:25.169Z] Copying: 722/1024 [MB] (17 MBps) [2024-12-16T22:18:26.113Z] Copying: 742/1024 [MB] (19 MBps) [2024-12-16T22:18:27.058Z] Copying: 763/1024 [MB] (21 MBps) [2024-12-16T22:18:28.003Z] Copying: 777/1024 [MB] (14 MBps) [2024-12-16T22:18:28.948Z] Copying: 798/1024 [MB] (20 MBps) [2024-12-16T22:18:29.950Z] Copying: 816/1024 [MB] (18 MBps) [2024-12-16T22:18:30.890Z] Copying: 833/1024 [MB] (16 MBps) [2024-12-16T22:18:32.274Z] Copying: 853/1024 [MB] (20 MBps) [2024-12-16T22:18:33.216Z] Copying: 865/1024 [MB] (11 MBps) [2024-12-16T22:18:34.157Z] Copying: 875/1024 [MB] (10 MBps) [2024-12-16T22:18:35.099Z] Copying: 887/1024 [MB] (11 MBps) [2024-12-16T22:18:36.040Z] Copying: 898/1024 [MB] (11 MBps) [2024-12-16T22:18:36.981Z] Copying: 917/1024 [MB] (19 MBps) [2024-12-16T22:18:37.923Z] Copying: 935/1024 [MB] (18 MBps) [2024-12-16T22:18:39.308Z] Copying: 956/1024 [MB] (20 MBps) [2024-12-16T22:18:40.249Z] Copying: 971/1024 [MB] (15 MBps) [2024-12-16T22:18:41.193Z] Copying: 986/1024 [MB] (15 MBps) [2024-12-16T22:18:42.142Z] Copying: 1009/1024 [MB] (22 MBps) [2024-12-16T22:18:42.142Z] Copying: 1021/1024 [MB] (12 MBps) [2024-12-16T22:18:42.404Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 22:18:42.375072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.375166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:36.057 [2024-12-16 22:18:42.375193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:36.057 [2024-12-16 22:18:42.375205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.375231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:36.057 [2024-12-16 22:18:42.376037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.376067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:36.057 [2024-12-16 22:18:42.376080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:22:36.057 [2024-12-16 22:18:42.376090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.376595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.376620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:36.057 [2024-12-16 22:18:42.376639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:22:36.057 [2024-12-16 22:18:42.376649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.380816] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.380858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:36.057 [2024-12-16 22:18:42.380869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.151 ms 00:22:36.057 [2024-12-16 22:18:42.380879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.388030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.388079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:36.057 [2024-12-16 22:18:42.388090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.129 ms 00:22:36.057 [2024-12-16 22:18:42.388106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.391214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.391280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:36.057 [2024-12-16 22:18:42.391295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:22:36.057 [2024-12-16 22:18:42.391303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.395944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.395998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:36.057 [2024-12-16 22:18:42.396011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:22:36.057 [2024-12-16 22:18:42.396020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.396153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.396172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:36.057 [2024-12-16 22:18:42.396197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:36.057 [2024-12-16 22:18:42.396206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.399362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.399409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:36.057 [2024-12-16 22:18:42.399419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.140 ms 00:22:36.057 [2024-12-16 22:18:42.399426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.057 [2024-12-16 22:18:42.402330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.057 [2024-12-16 22:18:42.402373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:36.057 [2024-12-16 22:18:42.402383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:22:36.057 [2024-12-16 22:18:42.402390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.321 [2024-12-16 22:18:42.404427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.321 [2024-12-16 22:18:42.404473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:36.321 [2024-12-16 22:18:42.404483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:22:36.321 [2024-12-16 22:18:42.404490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
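The spdk_dd progress ticks above carry ISO-8601 timestamps, so the overall transfer rate can be recomputed independently of the per-tick MBps figures; the 1024 MB total is likewise consistent with --count=262144 at the 4 KiB block size. A sketch (Python; helper names are ours):

import re
from datetime import datetime

PROG_RE = re.compile(
    r"\[(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)Z\]\s+Copying:\s+(\d+)/\d+\s+\[MB\]"
)

def average_rate(log_text):
    """MB/s between the first and last progress tick."""
    ticks = [(datetime.fromisoformat(ts), int(done))
             for ts, done in PROG_RE.findall(log_text)]
    (t0, d0), (t1, d1) = ticks[0], ticks[-1]
    return (d1 - d0) / (t1 - t0).total_seconds()

sample = ("[2024-12-16T22:17:44.053Z] Copying: 12/1024 [MB] (12 MBps) "
          "[2024-12-16T22:18:42.142Z] Copying: 1021/1024 [MB] (12 MBps)")
print(round(average_rate(sample), 1))  # ~17.4, in line with the reported 17 MBps average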
00:22:36.321 [2024-12-16 22:18:42.406659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.321 [2024-12-16 22:18:42.406709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:36.321 [2024-12-16 22:18:42.406720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.098 ms 00:22:36.321 [2024-12-16 22:18:42.406727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.321 [2024-12-16 22:18:42.406766] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:36.321 [2024-12-16 22:18:42.406783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:22:36.321 [2024-12-16 22:18:42.406974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.406998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:36.321 [2024-12-16 22:18:42.407093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [... ftl_dev_dump_bands records for Band 47 through Band 95 elided; each reports 0 / 261120 wr_cnt: 0 state: free ...] 00:22:36.322 [2024-12-16 22:18:42.407563] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:36.322 [2024-12-16 22:18:42.407611] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:36.322 [2024-12-16 22:18:42.407619] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1bd74c0-8120-4477-a5a8-3d7d2ecaf716 00:22:36.322 [2024-12-16 22:18:42.407627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:36.322 [2024-12-16 22:18:42.407635] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:36.322 [2024-12-16 22:18:42.407643] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:36.322 [2024-12-16 22:18:42.407652] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:36.322 [2024-12-16 22:18:42.407660] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:36.322 [2024-12-16 22:18:42.407680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:36.322 [2024-12-16 22:18:42.407687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:36.322 [2024-12-16 22:18:42.407701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:36.322 [2024-12-16 22:18:42.407709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:36.322 [2024-12-16 22:18:42.407716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-16 22:18:42.407725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:36.322 [2024-12-16 22:18:42.407734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:22:36.322 [2024-12-16 22:18:42.407743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.322 [2024-12-16 22:18:42.410059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-16 22:18:42.410102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:36.322 [2024-12-16 22:18:42.410113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:22:36.322 [2024-12-16 22:18:42.410125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.322 [2024-12-16 22:18:42.410250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.322 [2024-12-16 22:18:42.410259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:36.322 [2024-12-16 22:18:42.410268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:36.322 [2024-12-16 22:18:42.410277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.417568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.417618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:36.323 [2024-12-16 22:18:42.417630] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.417642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.417700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.417709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:36.323 [2024-12-16 22:18:42.417717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.417725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.417777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.417787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:36.323 [2024-12-16 22:18:42.417796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.417809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.417831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.417858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:36.323 [2024-12-16 22:18:42.417867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.417875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.431411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.431461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:36.323 [2024-12-16 22:18:42.431472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.431489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.441665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.441713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:36.323 [2024-12-16 22:18:42.441725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.441733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.441783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.441792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:36.323 [2024-12-16 22:18:42.441801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.441810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.441866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.441884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:36.323 [2024-12-16 22:18:42.441893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.441901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.441969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.441978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:22:36.323 [2024-12-16 22:18:42.442012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.442020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.442077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.442090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:36.323 [2024-12-16 22:18:42.442099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.442108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.442147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.442157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:36.323 [2024-12-16 22:18:42.442165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.442173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.442219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:36.323 [2024-12-16 22:18:42.442236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:36.323 [2024-12-16 22:18:42.442245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:36.323 [2024-12-16 22:18:42.442254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.323 [2024-12-16 22:18:42.442383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.287 ms, result 0 00:22:36.323 00:22:36.323 00:22:36.323 22:18:42 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:38.876 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:38.876 22:18:44 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:38.876 [2024-12-16 22:18:44.977947] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
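[Editor's note] The "WAF" figure in the ftl_dev_dump_stats output above is consistent with write amplification computed as total media writes divided by user writes: with user writes: 0 the ratio is undefined, and the log prints "inf" (the 960 total writes here are metadata-only). The corresponding dump after the restore pass below completes reports total writes: 105664 and user writes: 104704, i.e. 105664 / 104704 ≈ 1.0092, matching the printed value. A minimal Python sketch of that arithmetic follows; the helper name and the handling of the zero case are illustrative only, not SPDK API:

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor: media writes per user write.
        # The dump above prints "inf" when no user writes have occurred.
        if user_writes == 0:
            return float("inf")
        return total_writes / user_writes

    assert waf(960, 0) == float("inf")               # shutdown dump above
    assert round(waf(105664, 104704), 4) == 1.0092   # dump after the restore below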
00:22:38.876 [2024-12-16 22:18:44.978101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91576 ] 00:22:38.876 [2024-12-16 22:18:45.141097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.876 [2024-12-16 22:18:45.169226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.139 [2024-12-16 22:18:45.295669] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.139 [2024-12-16 22:18:45.296045] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.139 [2024-12-16 22:18:45.463960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.464017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:39.139 [2024-12-16 22:18:45.464032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:39.139 [2024-12-16 22:18:45.464041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.464097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.464108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:39.139 [2024-12-16 22:18:45.464118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:39.139 [2024-12-16 22:18:45.464125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.464150] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:39.139 [2024-12-16 22:18:45.464457] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:39.139 [2024-12-16 22:18:45.464477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.464486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:39.139 [2024-12-16 22:18:45.464498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:22:39.139 [2024-12-16 22:18:45.464506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.466168] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:39.139 [2024-12-16 22:18:45.470373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.470428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:39.139 [2024-12-16 22:18:45.470448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:22:39.139 [2024-12-16 22:18:45.470460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.470534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.470553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:39.139 [2024-12-16 22:18:45.470563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:39.139 [2024-12-16 22:18:45.470572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.478462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:39.139 [2024-12-16 22:18:45.478508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:39.139 [2024-12-16 22:18:45.478525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.847 ms 00:22:39.139 [2024-12-16 22:18:45.478538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.478640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.478651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:39.139 [2024-12-16 22:18:45.478660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:39.139 [2024-12-16 22:18:45.478668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.478722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.478733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:39.139 [2024-12-16 22:18:45.478742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:39.139 [2024-12-16 22:18:45.478753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.478783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:39.139 [2024-12-16 22:18:45.480833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.480886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:39.139 [2024-12-16 22:18:45.480904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:22:39.139 [2024-12-16 22:18:45.480915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.480954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.480967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:39.139 [2024-12-16 22:18:45.480976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:39.139 [2024-12-16 22:18:45.480986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.481012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:39.139 [2024-12-16 22:18:45.481034] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:39.139 [2024-12-16 22:18:45.481072] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:39.139 [2024-12-16 22:18:45.481088] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:39.139 [2024-12-16 22:18:45.481199] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:39.139 [2024-12-16 22:18:45.481214] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:39.139 [2024-12-16 22:18:45.481228] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:39.139 [2024-12-16 22:18:45.481238] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:39.139 [2024-12-16 22:18:45.481248] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:39.139 [2024-12-16 22:18:45.481256] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:39.139 [2024-12-16 22:18:45.481264] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:39.139 [2024-12-16 22:18:45.481276] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:39.139 [2024-12-16 22:18:45.481287] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:39.139 [2024-12-16 22:18:45.481295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.481302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:39.139 [2024-12-16 22:18:45.481310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:22:39.139 [2024-12-16 22:18:45.481320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.481409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.139 [2024-12-16 22:18:45.481421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:39.139 [2024-12-16 22:18:45.481429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:39.139 [2024-12-16 22:18:45.481437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.139 [2024-12-16 22:18:45.481538] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:39.139 [2024-12-16 22:18:45.481553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:39.139 [2024-12-16 22:18:45.481563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:39.139 [2024-12-16 22:18:45.481578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.139 [2024-12-16 22:18:45.481588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:39.139 [2024-12-16 22:18:45.481596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:39.139 [2024-12-16 22:18:45.481604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:39.139 [2024-12-16 22:18:45.481613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:39.139 [2024-12-16 22:18:45.481622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:39.139 [2024-12-16 22:18:45.481630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.139 [2024-12-16 22:18:45.481639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:39.139 [2024-12-16 22:18:45.481649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:39.139 [2024-12-16 22:18:45.481656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.140 [2024-12-16 22:18:45.481664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:39.140 [2024-12-16 22:18:45.481674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:39.140 [2024-12-16 22:18:45.481683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.140 [2024-12-16 22:18:45.481691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:39.140 [2024-12-16 22:18:45.481699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:39.140 [2024-12-16 22:18:45.481707] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.140 [2024-12-16 22:18:45.481715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:39.140 [2024-12-16 22:18:45.481723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:39.140 [2024-12-16 22:18:45.481732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.140 [2024-12-16 22:18:45.481740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:39.140 [2024-12-16 22:18:45.481747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:39.140 [2024-12-16 22:18:45.481755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.140 [2024-12-16 22:18:45.481762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:39.140 [2024-12-16 22:18:45.481770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:39.140 [2024-12-16 22:18:45.481782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.140 [2024-12-16 22:18:45.481790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:39.140 [2024-12-16 22:18:45.481798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:39.140 [2024-12-16 22:18:45.481806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.140 [2024-12-16 22:18:45.481814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:39.140 [2024-12-16 22:18:45.481822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:39.140 [2024-12-16 22:18:45.481829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.140 [2024-12-16 22:18:45.482091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:39.140 [2024-12-16 22:18:45.482119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:39.140 [2024-12-16 22:18:45.482138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.140 [2024-12-16 22:18:45.482158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:39.140 [2024-12-16 22:18:45.482176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:39.140 [2024-12-16 22:18:45.482194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.140 [2024-12-16 22:18:45.482213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:39.140 [2024-12-16 22:18:45.482231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:39.140 [2024-12-16 22:18:45.482249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.140 [2024-12-16 22:18:45.482273] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:39.140 [2024-12-16 22:18:45.482296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:39.140 [2024-12-16 22:18:45.482316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:39.140 [2024-12-16 22:18:45.482341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.140 [2024-12-16 22:18:45.482361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:39.140 [2024-12-16 22:18:45.482452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:39.140 [2024-12-16 22:18:45.482476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:39.140 
[2024-12-16 22:18:45.482496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:39.140 [2024-12-16 22:18:45.482515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:39.140 [2024-12-16 22:18:45.482533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:39.140 [2024-12-16 22:18:45.482554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:39.140 [2024-12-16 22:18:45.482585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.140 [2024-12-16 22:18:45.482615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:39.140 [2024-12-16 22:18:45.482644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:39.140 [2024-12-16 22:18:45.482672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:39.140 [2024-12-16 22:18:45.482700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:39.140 [2024-12-16 22:18:45.482732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:39.140 [2024-12-16 22:18:45.482806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:39.140 [2024-12-16 22:18:45.482859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:39.140 [2024-12-16 22:18:45.482891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:39.140 [2024-12-16 22:18:45.482960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:39.140 [2024-12-16 22:18:45.482990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:39.140 [2024-12-16 22:18:45.483019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:39.140 [2024-12-16 22:18:45.483179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:39.140 [2024-12-16 22:18:45.483228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:39.140 [2024-12-16 22:18:45.483762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:39.140 [2024-12-16 22:18:45.483783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:39.140 [2024-12-16 22:18:45.483796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.140 [2024-12-16 22:18:45.483817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:39.140 [2024-12-16 22:18:45.483824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:39.140 [2024-12-16 22:18:45.483832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:39.140 [2024-12-16 22:18:45.483859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:39.140 [2024-12-16 22:18:45.483875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.140 [2024-12-16 22:18:45.483885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:39.140 [2024-12-16 22:18:45.483896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.401 ms 00:22:39.140 [2024-12-16 22:18:45.483908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.497554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.497612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.403 [2024-12-16 22:18:45.497630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.553 ms 00:22:39.403 [2024-12-16 22:18:45.497639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.497729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.497738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:39.403 [2024-12-16 22:18:45.497747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:39.403 [2024-12-16 22:18:45.497755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.517015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.517070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.403 [2024-12-16 22:18:45.517083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.197 ms 00:22:39.403 [2024-12-16 22:18:45.517092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.517139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.517150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.403 [2024-12-16 22:18:45.517160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:39.403 [2024-12-16 22:18:45.517168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.517916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.517996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.403 [2024-12-16 22:18:45.518011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:22:39.403 [2024-12-16 22:18:45.518021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.518209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.518220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.403 [2024-12-16 22:18:45.518228] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:39.403 [2024-12-16 22:18:45.518237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.526063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.526104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.403 [2024-12-16 22:18:45.526116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.805 ms 00:22:39.403 [2024-12-16 22:18:45.526124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.530098] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:39.403 [2024-12-16 22:18:45.530144] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:39.403 [2024-12-16 22:18:45.530166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.530175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:39.403 [2024-12-16 22:18:45.530185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.941 ms 00:22:39.403 [2024-12-16 22:18:45.530193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.545787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.545986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:39.403 [2024-12-16 22:18:45.546016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.534 ms 00:22:39.403 [2024-12-16 22:18:45.546025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.549233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.549411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:39.403 [2024-12-16 22:18:45.549431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:22:39.403 [2024-12-16 22:18:45.549439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.403 [2024-12-16 22:18:45.551988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.403 [2024-12-16 22:18:45.552034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:39.403 [2024-12-16 22:18:45.552044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:22:39.404 [2024-12-16 22:18:45.552051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.552415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.552428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:39.404 [2024-12-16 22:18:45.552438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:39.404 [2024-12-16 22:18:45.552445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.575561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.575781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:39.404 [2024-12-16 22:18:45.575803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.094 ms 00:22:39.404 [2024-12-16 22:18:45.575812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.583961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.404 [2024-12-16 22:18:45.586830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.586887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.404 [2024-12-16 22:18:45.586899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.958 ms 00:22:39.404 [2024-12-16 22:18:45.586910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.586986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.586997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:39.404 [2024-12-16 22:18:45.587006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:39.404 [2024-12-16 22:18:45.587014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.587082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.587095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.404 [2024-12-16 22:18:45.587104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:39.404 [2024-12-16 22:18:45.587120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.587140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.587149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:39.404 [2024-12-16 22:18:45.587158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:39.404 [2024-12-16 22:18:45.587166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.587205] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:39.404 [2024-12-16 22:18:45.587216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.587224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:39.404 [2024-12-16 22:18:45.587236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:39.404 [2024-12-16 22:18:45.587245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.592647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.592695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:39.404 [2024-12-16 22:18:45.592707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.383 ms 00:22:39.404 [2024-12-16 22:18:45.592716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 [2024-12-16 22:18:45.592800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.404 [2024-12-16 22:18:45.592811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:39.404 [2024-12-16 22:18:45.592820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:39.404 [2024-12-16 22:18:45.592870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.404 
[2024-12-16 22:18:45.594154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.700 ms, result 0 00:22:40.350  [2024-12-16T22:18:47.643Z] Copying: 10144/1048576 [kB] (10144 kBps) [2024-12-16T22:18:49.030Z] Copying: 20384/1048576 [kB] (10240 kBps) [2024-12-16T22:18:49.603Z] Copying: 37/1024 [MB] (17 MBps) [2024-12-16T22:18:50.991Z] Copying: 89/1024 [MB] (52 MBps) [2024-12-16T22:18:51.934Z] Copying: 141/1024 [MB] (52 MBps) [2024-12-16T22:18:52.877Z] Copying: 193/1024 [MB] (52 MBps) [2024-12-16T22:18:53.822Z] Copying: 245/1024 [MB] (52 MBps) [2024-12-16T22:18:54.767Z] Copying: 296/1024 [MB] (50 MBps) [2024-12-16T22:18:55.755Z] Copying: 323/1024 [MB] (26 MBps) [2024-12-16T22:18:56.708Z] Copying: 333/1024 [MB] (10 MBps) [2024-12-16T22:18:57.654Z] Copying: 352/1024 [MB] (19 MBps) [2024-12-16T22:18:59.042Z] Copying: 366/1024 [MB] (13 MBps) [2024-12-16T22:18:59.616Z] Copying: 378/1024 [MB] (12 MBps) [2024-12-16T22:19:01.003Z] Copying: 393/1024 [MB] (14 MBps) [2024-12-16T22:19:01.949Z] Copying: 416/1024 [MB] (23 MBps) [2024-12-16T22:19:02.893Z] Copying: 429/1024 [MB] (12 MBps) [2024-12-16T22:19:03.838Z] Copying: 439/1024 [MB] (10 MBps) [2024-12-16T22:19:04.780Z] Copying: 460/1024 [MB] (20 MBps) [2024-12-16T22:19:05.724Z] Copying: 486/1024 [MB] (26 MBps) [2024-12-16T22:19:06.667Z] Copying: 519/1024 [MB] (32 MBps) [2024-12-16T22:19:07.611Z] Copying: 538/1024 [MB] (19 MBps) [2024-12-16T22:19:08.999Z] Copying: 588/1024 [MB] (50 MBps) [2024-12-16T22:19:09.946Z] Copying: 602/1024 [MB] (14 MBps) [2024-12-16T22:19:10.889Z] Copying: 614/1024 [MB] (11 MBps) [2024-12-16T22:19:11.834Z] Copying: 625/1024 [MB] (11 MBps) [2024-12-16T22:19:12.777Z] Copying: 636/1024 [MB] (10 MBps) [2024-12-16T22:19:13.722Z] Copying: 654/1024 [MB] (18 MBps) [2024-12-16T22:19:14.666Z] Copying: 680176/1048576 [kB] (10168 kBps) [2024-12-16T22:19:15.607Z] Copying: 697/1024 [MB] (33 MBps) [2024-12-16T22:19:16.994Z] Copying: 720/1024 [MB] (23 MBps) [2024-12-16T22:19:17.939Z] Copying: 738/1024 [MB] (18 MBps) [2024-12-16T22:19:18.883Z] Copying: 752/1024 [MB] (13 MBps) [2024-12-16T22:19:19.826Z] Copying: 771/1024 [MB] (19 MBps) [2024-12-16T22:19:20.771Z] Copying: 790/1024 [MB] (19 MBps) [2024-12-16T22:19:21.783Z] Copying: 801/1024 [MB] (10 MBps) [2024-12-16T22:19:22.724Z] Copying: 814/1024 [MB] (13 MBps) [2024-12-16T22:19:23.668Z] Copying: 837/1024 [MB] (22 MBps) [2024-12-16T22:19:24.612Z] Copying: 854/1024 [MB] (16 MBps) [2024-12-16T22:19:25.999Z] Copying: 891/1024 [MB] (37 MBps) [2024-12-16T22:19:26.942Z] Copying: 925/1024 [MB] (33 MBps) [2024-12-16T22:19:27.886Z] Copying: 949/1024 [MB] (24 MBps) [2024-12-16T22:19:28.829Z] Copying: 973/1024 [MB] (24 MBps) [2024-12-16T22:19:29.773Z] Copying: 989/1024 [MB] (15 MBps) [2024-12-16T22:19:30.717Z] Copying: 1008/1024 [MB] (19 MBps) [2024-12-16T22:19:31.662Z] Copying: 1023/1024 [MB] (14 MBps) [2024-12-16T22:19:31.662Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-12-16 22:19:31.475494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.315 [2024-12-16 22:19:31.475572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:25.315 [2024-12-16 22:19:31.475589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:25.315 [2024-12-16 22:19:31.475599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.315 [2024-12-16 22:19:31.479021] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on 
app_thread 00:23:25.315 [2024-12-16 22:19:31.481379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.315 [2024-12-16 22:19:31.481579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:25.315 [2024-12-16 22:19:31.481608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:23:25.315 [2024-12-16 22:19:31.481618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.315 [2024-12-16 22:19:31.493195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.315 [2024-12-16 22:19:31.493245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:25.315 [2024-12-16 22:19:31.493259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.386 ms 00:23:25.315 [2024-12-16 22:19:31.493268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.315 [2024-12-16 22:19:31.520082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.315 [2024-12-16 22:19:31.520260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:25.315 [2024-12-16 22:19:31.520281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.796 ms 00:23:25.315 [2024-12-16 22:19:31.520301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.315 [2024-12-16 22:19:31.526432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.315 [2024-12-16 22:19:31.526472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:25.315 [2024-12-16 22:19:31.526483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:23:25.315 [2024-12-16 22:19:31.526492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.315 [2024-12-16 22:19:31.529004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.315 [2024-12-16 22:19:31.529052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:25.315 [2024-12-16 22:19:31.529062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:23:25.315 [2024-12-16 22:19:31.529070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.315 [2024-12-16 22:19:31.533373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.315 [2024-12-16 22:19:31.533423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:25.315 [2024-12-16 22:19:31.533435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:23:25.315 [2024-12-16 22:19:31.533453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.578 [2024-12-16 22:19:31.805199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.578 [2024-12-16 22:19:31.805275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:25.578 [2024-12-16 22:19:31.805296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 271.698 ms 00:23:25.578 [2024-12-16 22:19:31.805305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.578 [2024-12-16 22:19:31.807618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.578 [2024-12-16 22:19:31.807666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:25.578 [2024-12-16 22:19:31.807676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.295 ms 00:23:25.578 [2024-12-16 
22:19:31.807683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.578 [2024-12-16 22:19:31.809399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.578 [2024-12-16 22:19:31.809572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:25.578 [2024-12-16 22:19:31.809590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:23:25.578 [2024-12-16 22:19:31.809597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.578 [2024-12-16 22:19:31.811263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.578 [2024-12-16 22:19:31.811309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:25.578 [2024-12-16 22:19:31.811320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:23:25.578 [2024-12-16 22:19:31.811327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.579 [2024-12-16 22:19:31.812756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.579 [2024-12-16 22:19:31.812933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:25.579 [2024-12-16 22:19:31.812951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:23:25.579 [2024-12-16 22:19:31.812959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.579 [2024-12-16 22:19:31.812993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:25.579 [2024-12-16 22:19:31.813008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104704 / 261120 wr_cnt: 1 state: open 00:23:25.579 [2024-12-16 22:19:31.813020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:25.579 [2024-12-16 22:19:31.813114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:23:25.579 [... ftl_dev_dump_bands records for Band 15 through Band 88 elided; each reports 0 / 261120 wr_cnt: 0 state: free ...] 00:23:25.580 [2024-12-16 22:19:31.813697] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:25.580 [2024-12-16 22:19:31.813797] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:25.580 [2024-12-16 22:19:31.813805] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1bd74c0-8120-4477-a5a8-3d7d2ecaf716 00:23:25.580 [2024-12-16 22:19:31.813813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104704 00:23:25.580 [2024-12-16 22:19:31.813832] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105664 00:23:25.580 [2024-12-16 22:19:31.813853] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104704 00:23:25.580 [2024-12-16 22:19:31.813862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:23:25.580 [2024-12-16 22:19:31.813870] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:25.580 [2024-12-16 22:19:31.813878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:25.580 [2024-12-16 22:19:31.813886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:25.580 [2024-12-16 22:19:31.813900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:25.580 [2024-12-16 22:19:31.813907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:25.580 [2024-12-16 22:19:31.813915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.580 [2024-12-16 22:19:31.813923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:25.580 [2024-12-16 22:19:31.813932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:23:25.580 [2024-12-16 22:19:31.813940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.816200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.580 [2024-12-16 22:19:31.816263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize L2P 00:23:25.580 [2024-12-16 22:19:31.816285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.242 ms 00:23:25.580 [2024-12-16 22:19:31.816293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.816415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.580 [2024-12-16 22:19:31.816425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:25.580 [2024-12-16 22:19:31.816434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:23:25.580 [2024-12-16 22:19:31.816444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.823664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.823830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:25.580 [2024-12-16 22:19:31.823884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.823893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.823955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.823964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:25.580 [2024-12-16 22:19:31.823974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.823986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.824050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.824060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:25.580 [2024-12-16 22:19:31.824069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.824076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.824091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.824106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:25.580 [2024-12-16 22:19:31.824115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.824122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.837461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.837526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:25.580 [2024-12-16 22:19:31.837537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.837547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.848557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.848611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.580 [2024-12-16 22:19:31.848633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.848642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.848704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 
[2024-12-16 22:19:31.848715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.580 [2024-12-16 22:19:31.848723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.848732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.848770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.848779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.580 [2024-12-16 22:19:31.848787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.848796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.848897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.848912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.580 [2024-12-16 22:19:31.848921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.848929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.848958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.848968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:25.580 [2024-12-16 22:19:31.848978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.848986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.849031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.849043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.580 [2024-12-16 22:19:31.849052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.849060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.849105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:25.580 [2024-12-16 22:19:31.849115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.580 [2024-12-16 22:19:31.849125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:25.580 [2024-12-16 22:19:31.849134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.580 [2024-12-16 22:19:31.849282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.750 ms, result 0 00:23:26.524 00:23:26.524 00:23:26.524 22:19:32 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:26.524 [2024-12-16 22:19:32.673667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:26.524 [2024-12-16 22:19:32.673944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92070 ] 00:23:26.524 [2024-12-16 22:19:32.833307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.524 [2024-12-16 22:19:32.861278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.786 [2024-12-16 22:19:32.976674] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:26.786 [2024-12-16 22:19:32.977066] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:27.048 [2024-12-16 22:19:33.136997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.137055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:27.048 [2024-12-16 22:19:33.137074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:27.048 [2024-12-16 22:19:33.137083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.137138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.137152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:27.048 [2024-12-16 22:19:33.137162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:27.048 [2024-12-16 22:19:33.137174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.137206] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:27.048 [2024-12-16 22:19:33.137469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:27.048 [2024-12-16 22:19:33.137485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.137493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:27.048 [2024-12-16 22:19:33.137508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:23:27.048 [2024-12-16 22:19:33.137515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.139286] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:27.048 [2024-12-16 22:19:33.142989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.143039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:27.048 [2024-12-16 22:19:33.143057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 ms 00:23:27.048 [2024-12-16 22:19:33.143069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.143139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.143150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:27.048 [2024-12-16 22:19:33.143164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:27.048 [2024-12-16 22:19:33.143172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.151182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:27.048 [2024-12-16 22:19:33.151379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:27.048 [2024-12-16 22:19:33.151404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.967 ms 00:23:27.048 [2024-12-16 22:19:33.151418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.151522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.151531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:27.048 [2024-12-16 22:19:33.151544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:27.048 [2024-12-16 22:19:33.151551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.151610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.151621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:27.048 [2024-12-16 22:19:33.151629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:27.048 [2024-12-16 22:19:33.151642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.151664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:27.048 [2024-12-16 22:19:33.153646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.153685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:27.048 [2024-12-16 22:19:33.153695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.987 ms 00:23:27.048 [2024-12-16 22:19:33.153702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.048 [2024-12-16 22:19:33.153741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.048 [2024-12-16 22:19:33.153749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:27.048 [2024-12-16 22:19:33.153764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:27.048 [2024-12-16 22:19:33.153778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.049 [2024-12-16 22:19:33.153800] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:27.049 [2024-12-16 22:19:33.153827] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:27.049 [2024-12-16 22:19:33.153888] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:27.049 [2024-12-16 22:19:33.153905] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:27.049 [2024-12-16 22:19:33.154012] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:27.049 [2024-12-16 22:19:33.154027] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:27.049 [2024-12-16 22:19:33.154040] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:27.049 [2024-12-16 22:19:33.154053] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154063] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154071] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:27.049 [2024-12-16 22:19:33.154079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:27.049 [2024-12-16 22:19:33.154087] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:27.049 [2024-12-16 22:19:33.154096] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:27.049 [2024-12-16 22:19:33.154118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.049 [2024-12-16 22:19:33.154125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:27.049 [2024-12-16 22:19:33.154133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:27.049 [2024-12-16 22:19:33.154140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.049 [2024-12-16 22:19:33.154227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.049 [2024-12-16 22:19:33.154236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:27.049 [2024-12-16 22:19:33.154244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:27.049 [2024-12-16 22:19:33.154251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.049 [2024-12-16 22:19:33.154350] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:27.049 [2024-12-16 22:19:33.154362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:27.049 [2024-12-16 22:19:33.154379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:27.049 [2024-12-16 22:19:33.154411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:27.049 [2024-12-16 22:19:33.154437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:27.049 [2024-12-16 22:19:33.154452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:27.049 [2024-12-16 22:19:33.154462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:27.049 [2024-12-16 22:19:33.154471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:27.049 [2024-12-16 22:19:33.154479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:27.049 [2024-12-16 22:19:33.154487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:27.049 [2024-12-16 22:19:33.154495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:27.049 [2024-12-16 22:19:33.154510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154520] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:27.049 [2024-12-16 22:19:33.154536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:27.049 [2024-12-16 22:19:33.154560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:27.049 [2024-12-16 22:19:33.154584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:27.049 [2024-12-16 22:19:33.154608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:27.049 [2024-12-16 22:19:33.154631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:27.049 [2024-12-16 22:19:33.154650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:27.049 [2024-12-16 22:19:33.154657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:27.049 [2024-12-16 22:19:33.154665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:27.049 [2024-12-16 22:19:33.154674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:27.049 [2024-12-16 22:19:33.154681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:27.049 [2024-12-16 22:19:33.154689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:27.049 [2024-12-16 22:19:33.154707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:27.049 [2024-12-16 22:19:33.154715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154726] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:27.049 [2024-12-16 22:19:33.154738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:27.049 [2024-12-16 22:19:33.154746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:27.049 [2024-12-16 22:19:33.154764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:27.049 [2024-12-16 22:19:33.154772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:27.049 [2024-12-16 22:19:33.154779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:27.049 
[2024-12-16 22:19:33.154788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:27.049 [2024-12-16 22:19:33.154796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:27.049 [2024-12-16 22:19:33.154803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:27.049 [2024-12-16 22:19:33.154812] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:27.049 [2024-12-16 22:19:33.154821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:27.049 [2024-12-16 22:19:33.154830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:27.049 [2024-12-16 22:19:33.154865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:27.049 [2024-12-16 22:19:33.154873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:27.049 [2024-12-16 22:19:33.154880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:27.049 [2024-12-16 22:19:33.154888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:27.049 [2024-12-16 22:19:33.154895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:27.049 [2024-12-16 22:19:33.154902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:27.049 [2024-12-16 22:19:33.154909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:27.049 [2024-12-16 22:19:33.154916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:27.049 [2024-12-16 22:19:33.154923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:27.049 [2024-12-16 22:19:33.154931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:27.049 [2024-12-16 22:19:33.154941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:27.049 [2024-12-16 22:19:33.154948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:27.049 [2024-12-16 22:19:33.154955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:27.049 [2024-12-16 22:19:33.154962] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:27.049 [2024-12-16 22:19:33.154971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:27.049 [2024-12-16 22:19:33.154983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:27.049 [2024-12-16 22:19:33.154992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:27.049 [2024-12-16 22:19:33.155000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:27.049 [2024-12-16 22:19:33.155007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:27.049 [2024-12-16 22:19:33.155016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.049 [2024-12-16 22:19:33.155025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:27.049 [2024-12-16 22:19:33.155034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:23:27.050 [2024-12-16 22:19:33.155045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.168629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.168678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:27.050 [2024-12-16 22:19:33.168691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.536 ms 00:23:27.050 [2024-12-16 22:19:33.168700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.168791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.168800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:27.050 [2024-12-16 22:19:33.168809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:27.050 [2024-12-16 22:19:33.168818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.196614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.196711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:27.050 [2024-12-16 22:19:33.196738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.704 ms 00:23:27.050 [2024-12-16 22:19:33.196756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.196888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.196912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:27.050 [2024-12-16 22:19:33.196930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:27.050 [2024-12-16 22:19:33.196948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.197645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.197703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:27.050 [2024-12-16 22:19:33.197724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:23:27.050 [2024-12-16 22:19:33.197742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.198046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.198075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:27.050 [2024-12-16 22:19:33.198095] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:23:27.050 [2024-12-16 22:19:33.198160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.206022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.206215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:27.050 [2024-12-16 22:19:33.206233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.820 ms 00:23:27.050 [2024-12-16 22:19:33.206241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.209877] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:27.050 [2024-12-16 22:19:33.209920] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:27.050 [2024-12-16 22:19:33.209937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.209946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:27.050 [2024-12-16 22:19:33.209954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.594 ms 00:23:27.050 [2024-12-16 22:19:33.209961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.225658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.225849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:27.050 [2024-12-16 22:19:33.225881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.644 ms 00:23:27.050 [2024-12-16 22:19:33.225889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.228647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.228776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:27.050 [2024-12-16 22:19:33.228787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:23:27.050 [2024-12-16 22:19:33.228794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.231082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.231127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:27.050 [2024-12-16 22:19:33.231137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:23:27.050 [2024-12-16 22:19:33.231144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.231485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.231497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:27.050 [2024-12-16 22:19:33.231511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:23:27.050 [2024-12-16 22:19:33.231522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.255213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.255280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:27.050 [2024-12-16 22:19:33.255295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.662 ms 00:23:27.050 [2024-12-16 22:19:33.255303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.263720] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:27.050 [2024-12-16 22:19:33.266896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.266937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:27.050 [2024-12-16 22:19:33.266966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.540 ms 00:23:27.050 [2024-12-16 22:19:33.266976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.267060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.267072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:27.050 [2024-12-16 22:19:33.267082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:27.050 [2024-12-16 22:19:33.267090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.268910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.268956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:27.050 [2024-12-16 22:19:33.268968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:23:27.050 [2024-12-16 22:19:33.268975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.269004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.269013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:27.050 [2024-12-16 22:19:33.269023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:27.050 [2024-12-16 22:19:33.269031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.269071] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:27.050 [2024-12-16 22:19:33.269082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.269091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:27.050 [2024-12-16 22:19:33.269102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:27.050 [2024-12-16 22:19:33.269111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.274888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.274934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:27.050 [2024-12-16 22:19:33.274945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.758 ms 00:23:27.050 [2024-12-16 22:19:33.274953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 [2024-12-16 22:19:33.275035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.050 [2024-12-16 22:19:33.275045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:27.050 [2024-12-16 22:19:33.275054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:27.050 [2024-12-16 22:19:33.275070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.050 
[2024-12-16 22:19:33.276531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.017 ms, result 0 00:23:28.437  [2024-12-16T22:19:35.728Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-16T22:19:36.671Z] Copying: 29/1024 [MB] (16 MBps) [2024-12-16T22:19:37.614Z] Copying: 40/1024 [MB] (10 MBps) [2024-12-16T22:19:38.556Z] Copying: 64/1024 [MB] (23 MBps) [2024-12-16T22:19:39.498Z] Copying: 78/1024 [MB] (14 MBps) [2024-12-16T22:19:40.881Z] Copying: 96/1024 [MB] (17 MBps) [2024-12-16T22:19:41.824Z] Copying: 117/1024 [MB] (21 MBps) [2024-12-16T22:19:42.768Z] Copying: 134/1024 [MB] (17 MBps) [2024-12-16T22:19:43.711Z] Copying: 150/1024 [MB] (16 MBps) [2024-12-16T22:19:44.654Z] Copying: 170/1024 [MB] (20 MBps) [2024-12-16T22:19:45.598Z] Copying: 184/1024 [MB] (13 MBps) [2024-12-16T22:19:46.540Z] Copying: 202/1024 [MB] (18 MBps) [2024-12-16T22:19:47.549Z] Copying: 220/1024 [MB] (17 MBps) [2024-12-16T22:19:48.492Z] Copying: 236/1024 [MB] (15 MBps) [2024-12-16T22:19:49.877Z] Copying: 253/1024 [MB] (16 MBps) [2024-12-16T22:19:50.820Z] Copying: 269/1024 [MB] (16 MBps) [2024-12-16T22:19:51.763Z] Copying: 297/1024 [MB] (27 MBps) [2024-12-16T22:19:52.707Z] Copying: 317/1024 [MB] (20 MBps) [2024-12-16T22:19:53.650Z] Copying: 337/1024 [MB] (19 MBps) [2024-12-16T22:19:54.594Z] Copying: 355/1024 [MB] (17 MBps) [2024-12-16T22:19:55.537Z] Copying: 370/1024 [MB] (15 MBps) [2024-12-16T22:19:56.479Z] Copying: 384/1024 [MB] (13 MBps) [2024-12-16T22:19:57.467Z] Copying: 394/1024 [MB] (10 MBps) [2024-12-16T22:19:58.852Z] Copying: 405/1024 [MB] (10 MBps) [2024-12-16T22:19:59.795Z] Copying: 421/1024 [MB] (15 MBps) [2024-12-16T22:20:00.738Z] Copying: 436/1024 [MB] (15 MBps) [2024-12-16T22:20:01.679Z] Copying: 453/1024 [MB] (17 MBps) [2024-12-16T22:20:02.621Z] Copying: 473/1024 [MB] (19 MBps) [2024-12-16T22:20:03.566Z] Copying: 488/1024 [MB] (15 MBps) [2024-12-16T22:20:04.510Z] Copying: 508/1024 [MB] (20 MBps) [2024-12-16T22:20:05.894Z] Copying: 525/1024 [MB] (16 MBps) [2024-12-16T22:20:06.466Z] Copying: 543/1024 [MB] (18 MBps) [2024-12-16T22:20:07.851Z] Copying: 553/1024 [MB] (10 MBps) [2024-12-16T22:20:08.795Z] Copying: 564/1024 [MB] (10 MBps) [2024-12-16T22:20:09.740Z] Copying: 575/1024 [MB] (10 MBps) [2024-12-16T22:20:10.686Z] Copying: 588/1024 [MB] (13 MBps) [2024-12-16T22:20:11.630Z] Copying: 610/1024 [MB] (21 MBps) [2024-12-16T22:20:12.575Z] Copying: 626/1024 [MB] (16 MBps) [2024-12-16T22:20:13.548Z] Copying: 639/1024 [MB] (12 MBps) [2024-12-16T22:20:14.504Z] Copying: 649/1024 [MB] (10 MBps) [2024-12-16T22:20:15.890Z] Copying: 660/1024 [MB] (10 MBps) [2024-12-16T22:20:16.832Z] Copying: 670/1024 [MB] (10 MBps) [2024-12-16T22:20:17.776Z] Copying: 681/1024 [MB] (10 MBps) [2024-12-16T22:20:18.719Z] Copying: 701/1024 [MB] (19 MBps) [2024-12-16T22:20:19.663Z] Copying: 711/1024 [MB] (10 MBps) [2024-12-16T22:20:20.606Z] Copying: 725/1024 [MB] (13 MBps) [2024-12-16T22:20:21.550Z] Copying: 745/1024 [MB] (19 MBps) [2024-12-16T22:20:22.494Z] Copying: 759/1024 [MB] (14 MBps) [2024-12-16T22:20:23.882Z] Copying: 778/1024 [MB] (19 MBps) [2024-12-16T22:20:24.826Z] Copying: 801/1024 [MB] (22 MBps) [2024-12-16T22:20:25.769Z] Copying: 821/1024 [MB] (19 MBps) [2024-12-16T22:20:26.713Z] Copying: 837/1024 [MB] (15 MBps) [2024-12-16T22:20:27.658Z] Copying: 853/1024 [MB] (15 MBps) [2024-12-16T22:20:28.601Z] Copying: 873/1024 [MB] (20 MBps) [2024-12-16T22:20:29.546Z] Copying: 898/1024 [MB] (25 MBps) [2024-12-16T22:20:30.490Z] Copying: 912/1024 [MB] (13 MBps) 
[2024-12-16T22:20:31.879Z] Copying: 925/1024 [MB] (12 MBps) [2024-12-16T22:20:32.834Z] Copying: 936/1024 [MB] (11 MBps) [2024-12-16T22:20:33.780Z] Copying: 951/1024 [MB] (14 MBps) [2024-12-16T22:20:34.724Z] Copying: 971/1024 [MB] (20 MBps) [2024-12-16T22:20:35.658Z] Copying: 987/1024 [MB] (16 MBps) [2024-12-16T22:20:35.917Z] Copying: 1013/1024 [MB] (25 MBps) [2024-12-16T22:20:35.917Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 22:20:35.871668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.570 [2024-12-16 22:20:35.871729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:29.570 [2024-12-16 22:20:35.871744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:29.570 [2024-12-16 22:20:35.871753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.570 [2024-12-16 22:20:35.871776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:29.570 [2024-12-16 22:20:35.872261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.570 [2024-12-16 22:20:35.872280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:29.570 [2024-12-16 22:20:35.872294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:24:29.570 [2024-12-16 22:20:35.872302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.570 [2024-12-16 22:20:35.872543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.570 [2024-12-16 22:20:35.872554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:29.570 [2024-12-16 22:20:35.872564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:29.570 [2024-12-16 22:20:35.872574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.570 [2024-12-16 22:20:35.879227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.570 [2024-12-16 22:20:35.879257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:29.570 [2024-12-16 22:20:35.879266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:24:29.570 [2024-12-16 22:20:35.879279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.570 [2024-12-16 22:20:35.885588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.570 [2024-12-16 22:20:35.885610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:29.570 [2024-12-16 22:20:35.885621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.277 ms 00:24:29.570 [2024-12-16 22:20:35.885628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.570 [2024-12-16 22:20:35.887510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.570 [2024-12-16 22:20:35.887538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:29.570 [2024-12-16 22:20:35.887547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:24:29.570 [2024-12-16 22:20:35.887555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.570 [2024-12-16 22:20:35.891640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.570 [2024-12-16 22:20:35.891670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:29.570 [2024-12-16 22:20:35.891679] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.057 ms 00:24:29.570 [2024-12-16 22:20:35.891692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.830 [2024-12-16 22:20:36.061214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.830 [2024-12-16 22:20:36.061241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:29.830 [2024-12-16 22:20:36.061251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 169.489 ms 00:24:29.830 [2024-12-16 22:20:36.061267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.830 [2024-12-16 22:20:36.063930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.830 [2024-12-16 22:20:36.063957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:29.830 [2024-12-16 22:20:36.063966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms 00:24:29.830 [2024-12-16 22:20:36.063972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.830 [2024-12-16 22:20:36.066140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.830 [2024-12-16 22:20:36.066164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:29.830 [2024-12-16 22:20:36.066172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:24:29.830 [2024-12-16 22:20:36.066179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.830 [2024-12-16 22:20:36.067896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.830 [2024-12-16 22:20:36.067920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:29.830 [2024-12-16 22:20:36.067928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:24:29.830 [2024-12-16 22:20:36.067935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.830 [2024-12-16 22:20:36.069714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.830 [2024-12-16 22:20:36.069743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:29.830 [2024-12-16 22:20:36.069752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:24:29.830 [2024-12-16 22:20:36.069760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.830 [2024-12-16 22:20:36.069788] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:29.830 [2024-12-16 22:20:36.069812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:24:29.830 [2024-12-16 22:20:36.069823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 
[2024-12-16 22:20:36.069881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.069997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:24:29.830 [2024-12-16 22:20:36.070064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:29.830 [2024-12-16 22:20:36.070165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:29.831 [2024-12-16 22:20:36.070583] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:29.831 [2024-12-16 22:20:36.070590] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1bd74c0-8120-4477-a5a8-3d7d2ecaf716 00:24:29.831 [2024-12-16 22:20:36.070602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:24:29.831 [2024-12-16 22:20:36.070614] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 28096 00:24:29.831 [2024-12-16 22:20:36.070620] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 27136 00:24:29.831 [2024-12-16 22:20:36.070632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0354 00:24:29.831 [2024-12-16 22:20:36.070639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:29.831 [2024-12-16 22:20:36.070649] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] crit: 0 00:24:29.831 [2024-12-16 22:20:36.070656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:29.831 [2024-12-16 22:20:36.070662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:29.831 [2024-12-16 22:20:36.070674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:29.831 [2024-12-16 22:20:36.070680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.831 [2024-12-16 22:20:36.070688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:29.831 [2024-12-16 22:20:36.070695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:24:29.831 [2024-12-16 22:20:36.070702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 22:20:36.072080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.831 [2024-12-16 22:20:36.072095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:29.831 [2024-12-16 22:20:36.072105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:24:29.831 [2024-12-16 22:20:36.072112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 22:20:36.072185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.831 [2024-12-16 22:20:36.072194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:29.831 [2024-12-16 22:20:36.072202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:29.831 [2024-12-16 22:20:36.072212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 22:20:36.076911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.831 [2024-12-16 22:20:36.076936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.831 [2024-12-16 22:20:36.076945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.831 [2024-12-16 22:20:36.076952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 22:20:36.076998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.831 [2024-12-16 22:20:36.077006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.831 [2024-12-16 22:20:36.077013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.831 [2024-12-16 22:20:36.077023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 22:20:36.077071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.831 [2024-12-16 22:20:36.077079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.831 [2024-12-16 22:20:36.077091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.831 [2024-12-16 22:20:36.077098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 22:20:36.077112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.831 [2024-12-16 22:20:36.077119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.831 [2024-12-16 22:20:36.077126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.831 [2024-12-16 22:20:36.077133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 
22:20:36.085442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.831 [2024-12-16 22:20:36.085474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.831 [2024-12-16 22:20:36.085483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.831 [2024-12-16 22:20:36.085492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.831 [2024-12-16 22:20:36.092340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.831 [2024-12-16 22:20:36.092379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.831 [2024-12-16 22:20:36.092391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.832 [2024-12-16 22:20:36.092399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.832 [2024-12-16 22:20:36.092446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.832 [2024-12-16 22:20:36.092455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.832 [2024-12-16 22:20:36.092463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.832 [2024-12-16 22:20:36.092470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.832 [2024-12-16 22:20:36.092511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.832 [2024-12-16 22:20:36.092520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.832 [2024-12-16 22:20:36.092528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.832 [2024-12-16 22:20:36.092535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.832 [2024-12-16 22:20:36.092593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.832 [2024-12-16 22:20:36.092606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.832 [2024-12-16 22:20:36.092613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.832 [2024-12-16 22:20:36.092621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.832 [2024-12-16 22:20:36.092650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.832 [2024-12-16 22:20:36.092659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:29.832 [2024-12-16 22:20:36.092667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.832 [2024-12-16 22:20:36.092675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.832 [2024-12-16 22:20:36.092708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.832 [2024-12-16 22:20:36.092722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.832 [2024-12-16 22:20:36.092730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.832 [2024-12-16 22:20:36.092737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.832 [2024-12-16 22:20:36.092773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.832 [2024-12-16 22:20:36.092785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.832 [2024-12-16 22:20:36.092793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.832 [2024-12-16 22:20:36.092800] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.832 [2024-12-16 22:20:36.092920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 221.232 ms, result 0
00:24:30.090
00:24:30.090
00:24:30.090 22:20:36 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:32.621 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:32.621 Process with pid 90161 is not found
00:24:32.621 Remove shared memory files
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90161
00:24:32.621 22:20:38 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90161 ']'
00:24:32.621 22:20:38 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90161
00:24:32.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90161) - No such process
00:24:32.621 22:20:38 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 90161 is not found'
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:24:32.621 22:20:38 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:24:32.621 ************************************
00:24:32.621 END TEST ftl_restore
00:24:32.621 ************************************
00:24:32.621
00:24:32.621 real 4m9.594s
00:24:32.621 user 3m58.291s
00:24:32.621 sys 0m11.422s
00:24:32.621 22:20:38 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:32.621 22:20:38 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:24:32.621 22:20:38 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:24:32.621 22:20:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:24:32.621 22:20:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:32.621 22:20:38 ftl -- common/autotest_common.sh@10 -- # set +x
00:24:32.621 ************************************
00:24:32.621 START TEST ftl_dirty_shutdown
00:24:32.621 ************************************
00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:24:32.621 * Looking for test storage...
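Two details in the shutdown output above are worth decoding. The WAF figure printed by ftl_dev_dump_stats is just total media writes divided by user writes: 28096 / 27136 ≈ 1.0354, i.e. the FTL issued about 3.5% more writes than the workload requested. And the "testfile: OK" line is the payoff of the restore test's checksum round-trip, which plain md5sum can reproduce. A minimal sketch, assuming hypothetical paths (the testfile name here is illustrative, not the exact fixture):

    # before tearing the device down: record a checksum of the data file
    md5sum /path/to/testfile > /path/to/testfile.md5
    # after the FTL device is brought back and the data re-read: verify it
    md5sum -c /path/to/testfile.md5   # prints "/path/to/testfile: OK" on a byte-identical match
    # the WAF the stats dump reports, recomputed from its own counters
    awk 'BEGIN { printf "%.4f\n", 28096 / 27136 }'   # -> 1.0354

A WAF close to 1.0 after a full restore cycle is the expected result here: almost every write the device performed was a user write, with only a small overhead from FTL metadata and relocation.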
00:24:32.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:32.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.621 --rc genhtml_branch_coverage=1 00:24:32.621 --rc genhtml_function_coverage=1 00:24:32.621 --rc genhtml_legend=1 00:24:32.621 --rc geninfo_all_blocks=1 00:24:32.621 --rc geninfo_unexecuted_blocks=1 00:24:32.621 00:24:32.621 ' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:32.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.621 --rc genhtml_branch_coverage=1 00:24:32.621 --rc genhtml_function_coverage=1 00:24:32.621 --rc genhtml_legend=1 00:24:32.621 --rc geninfo_all_blocks=1 00:24:32.621 --rc geninfo_unexecuted_blocks=1 00:24:32.621 00:24:32.621 ' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:32.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.621 --rc genhtml_branch_coverage=1 00:24:32.621 --rc genhtml_function_coverage=1 00:24:32.621 --rc genhtml_legend=1 00:24:32.621 --rc geninfo_all_blocks=1 00:24:32.621 --rc geninfo_unexecuted_blocks=1 00:24:32.621 00:24:32.621 ' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:32.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.621 --rc genhtml_branch_coverage=1 00:24:32.621 --rc genhtml_function_coverage=1 00:24:32.621 --rc genhtml_legend=1 00:24:32.621 --rc geninfo_all_blocks=1 00:24:32.621 --rc geninfo_unexecuted_blocks=1 00:24:32.621 00:24:32.621 ' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:32.621 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:32.622 22:20:38 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92819
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92819
00:24:32.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92819 ']'
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:24:32.622 22:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:32.622 [2024-12-16 22:20:38.839577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
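The xtrace above (dirty_shutdown.sh@14 through @28) shows how the invocation "dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0" becomes the test's working variables: -c selects the NV-cache device, the leftover positional argument becomes the base device, and fixed sizes are set. A minimal sketch of that option handling, reconstructed from the trace rather than the script's verbatim source (the meaning of the unused -u branch is an assumption):

    #!/usr/bin/env bash
    # Reconstructed sketch of dirty_shutdown.sh's argument parsing, from the xtrace above.
    while getopts ":u:c:" opt; do        # matches the 'getopts :u:c: opt' lines in the trace
      case $opt in
        c) nv_cache=$OPTARG ;;           # -c names the NV-cache PCI address, here 0000:00:10.0
        u) device_uuid=$OPTARG ;;        # assumption: -u would carry a device UUID to reattach
      esac
    done
    shift 2                              # the trace shows a literal 'shift 2' for the single option used
    device=$1                            # remaining positional argument: the base bdev, 0000:00:11.0
    timeout=240                          # fixed test parameters, as set at @24-@28
    block_size=4096
    chunk_size=262144
    data_size=262144

With the arguments consumed, the script traps restore_kill for cleanup, launches spdk_tgt on core mask 0x1, and waits for it to listen on /var/tmp/spdk.sock before issuing any RPCs, which is what the waitforlisten lines above record.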
00:24:32.622 [2024-12-16 22:20:38.839691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92819 ] 00:24:32.881 [2024-12-16 22:20:38.998362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.881 [2024-12-16 22:20:39.017598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:33.520 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:33.778 22:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:34.036 22:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:34.036 { 00:24:34.036 "name": "nvme0n1", 00:24:34.036 "aliases": [ 00:24:34.036 "639c9bd3-e193-4fbb-99d4-fbf9cd7bf74c" 00:24:34.036 ], 00:24:34.036 "product_name": "NVMe disk", 00:24:34.036 "block_size": 4096, 00:24:34.036 "num_blocks": 1310720, 00:24:34.036 "uuid": "639c9bd3-e193-4fbb-99d4-fbf9cd7bf74c", 00:24:34.036 "numa_id": -1, 00:24:34.036 "assigned_rate_limits": { 00:24:34.036 "rw_ios_per_sec": 0, 00:24:34.036 "rw_mbytes_per_sec": 0, 00:24:34.036 "r_mbytes_per_sec": 0, 00:24:34.036 "w_mbytes_per_sec": 0 00:24:34.036 }, 00:24:34.036 "claimed": true, 00:24:34.036 "claim_type": "read_many_write_one", 00:24:34.036 "zoned": false, 00:24:34.036 "supported_io_types": { 00:24:34.036 "read": true, 00:24:34.036 "write": true, 00:24:34.036 "unmap": true, 00:24:34.036 "flush": true, 00:24:34.036 "reset": true, 00:24:34.036 "nvme_admin": true, 00:24:34.036 "nvme_io": true, 00:24:34.036 "nvme_io_md": false, 00:24:34.036 "write_zeroes": true, 00:24:34.036 "zcopy": false, 00:24:34.036 "get_zone_info": false, 00:24:34.036 "zone_management": false, 00:24:34.036 "zone_append": false, 00:24:34.036 "compare": true, 00:24:34.036 "compare_and_write": false, 00:24:34.036 "abort": true, 00:24:34.036 "seek_hole": false, 00:24:34.036 "seek_data": false, 00:24:34.036 
"copy": true, 00:24:34.036 "nvme_iov_md": false 00:24:34.036 }, 00:24:34.036 "driver_specific": { 00:24:34.036 "nvme": [ 00:24:34.036 { 00:24:34.036 "pci_address": "0000:00:11.0", 00:24:34.036 "trid": { 00:24:34.036 "trtype": "PCIe", 00:24:34.036 "traddr": "0000:00:11.0" 00:24:34.036 }, 00:24:34.036 "ctrlr_data": { 00:24:34.036 "cntlid": 0, 00:24:34.036 "vendor_id": "0x1b36", 00:24:34.036 "model_number": "QEMU NVMe Ctrl", 00:24:34.036 "serial_number": "12341", 00:24:34.036 "firmware_revision": "8.0.0", 00:24:34.036 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:34.036 "oacs": { 00:24:34.036 "security": 0, 00:24:34.037 "format": 1, 00:24:34.037 "firmware": 0, 00:24:34.037 "ns_manage": 1 00:24:34.037 }, 00:24:34.037 "multi_ctrlr": false, 00:24:34.037 "ana_reporting": false 00:24:34.037 }, 00:24:34.037 "vs": { 00:24:34.037 "nvme_version": "1.4" 00:24:34.037 }, 00:24:34.037 "ns_data": { 00:24:34.037 "id": 1, 00:24:34.037 "can_share": false 00:24:34.037 } 00:24:34.037 } 00:24:34.037 ], 00:24:34.037 "mp_policy": "active_passive" 00:24:34.037 } 00:24:34.037 } 00:24:34.037 ]' 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:34.037 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:34.295 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3be2cb50-46ca-495b-b6ab-577c42500e90 00:24:34.295 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:34.295 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3be2cb50-46ca-495b-b6ab-577c42500e90 00:24:34.295 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:34.553 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=420b737f-bc92-4cb1-8c1f-3d5d71a5eb4c 00:24:34.553 22:20:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 420b737f-bc92-4cb1-8c1f-3d5d71a5eb4c 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:34.811 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:34.812 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:34.812 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:35.070 { 00:24:35.070 "name": "eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b", 00:24:35.070 "aliases": [ 00:24:35.070 "lvs/nvme0n1p0" 00:24:35.070 ], 00:24:35.070 "product_name": "Logical Volume", 00:24:35.070 "block_size": 4096, 00:24:35.070 "num_blocks": 26476544, 00:24:35.070 "uuid": "eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b", 00:24:35.070 "assigned_rate_limits": { 00:24:35.070 "rw_ios_per_sec": 0, 00:24:35.070 "rw_mbytes_per_sec": 0, 00:24:35.070 "r_mbytes_per_sec": 0, 00:24:35.070 "w_mbytes_per_sec": 0 00:24:35.070 }, 00:24:35.070 "claimed": false, 00:24:35.070 "zoned": false, 00:24:35.070 "supported_io_types": { 00:24:35.070 "read": true, 00:24:35.070 "write": true, 00:24:35.070 "unmap": true, 00:24:35.070 "flush": false, 00:24:35.070 "reset": true, 00:24:35.070 "nvme_admin": false, 00:24:35.070 "nvme_io": false, 00:24:35.070 "nvme_io_md": false, 00:24:35.070 "write_zeroes": true, 00:24:35.070 "zcopy": false, 00:24:35.070 "get_zone_info": false, 00:24:35.070 "zone_management": false, 00:24:35.070 "zone_append": false, 00:24:35.070 "compare": false, 00:24:35.070 "compare_and_write": false, 00:24:35.070 "abort": false, 00:24:35.070 "seek_hole": true, 00:24:35.070 "seek_data": true, 00:24:35.070 "copy": false, 00:24:35.070 "nvme_iov_md": false 00:24:35.070 }, 00:24:35.070 "driver_specific": { 00:24:35.070 "lvol": { 00:24:35.070 "lvol_store_uuid": "420b737f-bc92-4cb1-8c1f-3d5d71a5eb4c", 00:24:35.070 "base_bdev": "nvme0n1", 00:24:35.070 "thin_provision": true, 00:24:35.070 "num_allocated_clusters": 0, 00:24:35.070 "snapshot": false, 00:24:35.070 "clone": false, 00:24:35.070 "esnap_clone": false 00:24:35.070 } 00:24:35.070 } 00:24:35.070 } 00:24:35.070 ]' 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:35.070 22:20:41 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:35.328 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:35.328 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:35.328 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:35.329 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:35.329 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:35.329 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:35.329 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:35.329 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:35.587 { 00:24:35.587 "name": "eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b", 00:24:35.587 "aliases": [ 00:24:35.587 "lvs/nvme0n1p0" 00:24:35.587 ], 00:24:35.587 "product_name": "Logical Volume", 00:24:35.587 "block_size": 4096, 00:24:35.587 "num_blocks": 26476544, 00:24:35.587 "uuid": "eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b", 00:24:35.587 "assigned_rate_limits": { 00:24:35.587 "rw_ios_per_sec": 0, 00:24:35.587 "rw_mbytes_per_sec": 0, 00:24:35.587 "r_mbytes_per_sec": 0, 00:24:35.587 "w_mbytes_per_sec": 0 00:24:35.587 }, 00:24:35.587 "claimed": false, 00:24:35.587 "zoned": false, 00:24:35.587 "supported_io_types": { 00:24:35.587 "read": true, 00:24:35.587 "write": true, 00:24:35.587 "unmap": true, 00:24:35.587 "flush": false, 00:24:35.587 "reset": true, 00:24:35.587 "nvme_admin": false, 00:24:35.587 "nvme_io": false, 00:24:35.587 "nvme_io_md": false, 00:24:35.587 "write_zeroes": true, 00:24:35.587 "zcopy": false, 00:24:35.587 "get_zone_info": false, 00:24:35.587 "zone_management": false, 00:24:35.587 "zone_append": false, 00:24:35.587 "compare": false, 00:24:35.587 "compare_and_write": false, 00:24:35.587 "abort": false, 00:24:35.587 "seek_hole": true, 00:24:35.587 "seek_data": true, 00:24:35.587 "copy": false, 00:24:35.587 "nvme_iov_md": false 00:24:35.587 }, 00:24:35.587 "driver_specific": { 00:24:35.587 "lvol": { 00:24:35.587 "lvol_store_uuid": "420b737f-bc92-4cb1-8c1f-3d5d71a5eb4c", 00:24:35.587 "base_bdev": "nvme0n1", 00:24:35.587 "thin_provision": true, 00:24:35.587 "num_allocated_clusters": 0, 00:24:35.587 "snapshot": false, 00:24:35.587 "clone": false, 00:24:35.587 "esnap_clone": false 00:24:35.587 } 00:24:35.587 } 00:24:35.587 } 00:24:35.587 ]' 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:35.587 22:20:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:35.845 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:35.845 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:35.845 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:35.845 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:35.845 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:35.845 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:35.845 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:36.104 { 00:24:36.104 "name": "eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b", 00:24:36.104 "aliases": [ 00:24:36.104 "lvs/nvme0n1p0" 00:24:36.104 ], 00:24:36.104 "product_name": "Logical Volume", 00:24:36.104 "block_size": 4096, 00:24:36.104 "num_blocks": 26476544, 00:24:36.104 "uuid": "eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b", 00:24:36.104 "assigned_rate_limits": { 00:24:36.104 "rw_ios_per_sec": 0, 00:24:36.104 "rw_mbytes_per_sec": 0, 00:24:36.104 "r_mbytes_per_sec": 0, 00:24:36.104 "w_mbytes_per_sec": 0 00:24:36.104 }, 00:24:36.104 "claimed": false, 00:24:36.104 "zoned": false, 00:24:36.104 "supported_io_types": { 00:24:36.104 "read": true, 00:24:36.104 "write": true, 00:24:36.104 "unmap": true, 00:24:36.104 "flush": false, 00:24:36.104 "reset": true, 00:24:36.104 "nvme_admin": false, 00:24:36.104 "nvme_io": false, 00:24:36.104 "nvme_io_md": false, 00:24:36.104 "write_zeroes": true, 00:24:36.104 "zcopy": false, 00:24:36.104 "get_zone_info": false, 00:24:36.104 "zone_management": false, 00:24:36.104 "zone_append": false, 00:24:36.104 "compare": false, 00:24:36.104 "compare_and_write": false, 00:24:36.104 "abort": false, 00:24:36.104 "seek_hole": true, 00:24:36.104 "seek_data": true, 00:24:36.104 "copy": false, 00:24:36.104 "nvme_iov_md": false 00:24:36.104 }, 00:24:36.104 "driver_specific": { 00:24:36.104 "lvol": { 00:24:36.104 "lvol_store_uuid": "420b737f-bc92-4cb1-8c1f-3d5d71a5eb4c", 00:24:36.104 "base_bdev": "nvme0n1", 00:24:36.104 "thin_provision": true, 00:24:36.104 "num_allocated_clusters": 0, 00:24:36.104 "snapshot": false, 00:24:36.104 "clone": false, 00:24:36.104 "esnap_clone": false 00:24:36.104 } 00:24:36.104 } 00:24:36.104 } 00:24:36.104 ]' 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b 
--l2p_dram_limit 10' 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:36.104 22:20:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d eeedcce1-0f50-4d3d-9ac9-f5a97bd2511b --l2p_dram_limit 10 -c nvc0n1p0 00:24:36.364 [2024-12-16 22:20:42.485618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.485659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:36.364 [2024-12-16 22:20:42.485673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:36.364 [2024-12-16 22:20:42.485681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.485725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.485735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.364 [2024-12-16 22:20:42.485742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:36.364 [2024-12-16 22:20:42.485751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.485768] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:36.364 [2024-12-16 22:20:42.486004] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:36.364 [2024-12-16 22:20:42.486017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.486027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.364 [2024-12-16 22:20:42.486033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:24:36.364 [2024-12-16 22:20:42.486040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.486063] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ee7c4cb9-b627-4ec2-9321-998c9eed227b 00:24:36.364 [2024-12-16 22:20:42.487031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.487055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:36.364 [2024-12-16 22:20:42.487064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:36.364 [2024-12-16 22:20:42.487070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.491700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.491726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.364 [2024-12-16 22:20:42.491735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:24:36.364 [2024-12-16 22:20:42.491741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.491802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.491808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.364 [2024-12-16 22:20:42.491816] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:36.364 [2024-12-16 22:20:42.491822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.491878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.491886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:36.364 [2024-12-16 22:20:42.491894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:36.364 [2024-12-16 22:20:42.491899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.491921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:36.364 [2024-12-16 22:20:42.493189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.493214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.364 [2024-12-16 22:20:42.493225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:24:36.364 [2024-12-16 22:20:42.493232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.493258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.493266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:36.364 [2024-12-16 22:20:42.493272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:36.364 [2024-12-16 22:20:42.493281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.364 [2024-12-16 22:20:42.493298] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:36.364 [2024-12-16 22:20:42.493418] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:36.364 [2024-12-16 22:20:42.493427] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:36.364 [2024-12-16 22:20:42.493437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:36.364 [2024-12-16 22:20:42.493447] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:36.364 [2024-12-16 22:20:42.493458] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:36.364 [2024-12-16 22:20:42.493464] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:36.364 [2024-12-16 22:20:42.493472] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:36.364 [2024-12-16 22:20:42.493477] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:36.364 [2024-12-16 22:20:42.493484] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:36.364 [2024-12-16 22:20:42.493489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.364 [2024-12-16 22:20:42.493497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:36.364 [2024-12-16 22:20:42.493502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:24:36.364 [2024-12-16 22:20:42.493509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.365 [2024-12-16 22:20:42.493574] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.365 [2024-12-16 22:20:42.493583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:36.365 [2024-12-16 22:20:42.493589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:36.365 [2024-12-16 22:20:42.493601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.365 [2024-12-16 22:20:42.493672] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:36.365 [2024-12-16 22:20:42.493680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:36.365 [2024-12-16 22:20:42.493686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:36.365 [2024-12-16 22:20:42.493705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:36.365 [2024-12-16 22:20:42.493722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.365 [2024-12-16 22:20:42.493733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:36.365 [2024-12-16 22:20:42.493740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:36.365 [2024-12-16 22:20:42.493745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.365 [2024-12-16 22:20:42.493752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:36.365 [2024-12-16 22:20:42.493757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:36.365 [2024-12-16 22:20:42.493764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:36.365 [2024-12-16 22:20:42.493779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:36.365 [2024-12-16 22:20:42.493795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:36.365 [2024-12-16 22:20:42.493812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:36.365 [2024-12-16 22:20:42.493830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493857] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:36.365 [2024-12-16 22:20:42.493865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:36.365 [2024-12-16 22:20:42.493885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.365 [2024-12-16 22:20:42.493897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:36.365 [2024-12-16 22:20:42.493904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:36.365 [2024-12-16 22:20:42.493909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.365 [2024-12-16 22:20:42.493917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:36.365 [2024-12-16 22:20:42.493922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:36.365 [2024-12-16 22:20:42.493930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:36.365 [2024-12-16 22:20:42.493943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:36.365 [2024-12-16 22:20:42.493948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493955] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:36.365 [2024-12-16 22:20:42.493962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:36.365 [2024-12-16 22:20:42.493970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.365 [2024-12-16 22:20:42.493977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.365 [2024-12-16 22:20:42.493985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:36.365 [2024-12-16 22:20:42.493994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:36.365 [2024-12-16 22:20:42.494002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:36.365 [2024-12-16 22:20:42.494007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:36.365 [2024-12-16 22:20:42.494015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:36.365 [2024-12-16 22:20:42.494021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:36.365 [2024-12-16 22:20:42.494030] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:36.365 [2024-12-16 22:20:42.494039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.365 [2024-12-16 22:20:42.494048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:36.365 [2024-12-16 22:20:42.494054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:36.365 [2024-12-16 22:20:42.494062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:36.365 [2024-12-16 22:20:42.494068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:36.365 [2024-12-16 22:20:42.494075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:36.365 [2024-12-16 22:20:42.494081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:36.365 [2024-12-16 22:20:42.494090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:36.365 [2024-12-16 22:20:42.494096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:36.365 [2024-12-16 22:20:42.494104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:36.365 [2024-12-16 22:20:42.494110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:36.365 [2024-12-16 22:20:42.494117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:36.365 [2024-12-16 22:20:42.494123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:36.365 [2024-12-16 22:20:42.494131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:36.365 [2024-12-16 22:20:42.494137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:36.365 [2024-12-16 22:20:42.494144] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:36.365 [2024-12-16 22:20:42.494151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.365 [2024-12-16 22:20:42.494159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:36.365 [2024-12-16 22:20:42.494166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:36.365 [2024-12-16 22:20:42.494173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:36.365 [2024-12-16 22:20:42.494179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:36.365 [2024-12-16 22:20:42.494187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.365 [2024-12-16 22:20:42.494193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:36.365 [2024-12-16 22:20:42.494203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:24:36.365 [2024-12-16 22:20:42.494219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.365 [2024-12-16 22:20:42.494248] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:36.365 [2024-12-16 22:20:42.494257] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:39.651 [2024-12-16 22:20:45.428272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.428452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:39.651 [2024-12-16 22:20:45.428477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2934.009 ms 00:24:39.651 [2024-12-16 22:20:45.428487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.436757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.436796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.651 [2024-12-16 22:20:45.436811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.190 ms 00:24:39.651 [2024-12-16 22:20:45.436819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.436937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.436947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:39.651 [2024-12-16 22:20:45.436957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:39.651 [2024-12-16 22:20:45.436964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.445434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.445467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.651 [2024-12-16 22:20:45.445479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.423 ms 00:24:39.651 [2024-12-16 22:20:45.445493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.445523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.445531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.651 [2024-12-16 22:20:45.445540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:39.651 [2024-12-16 22:20:45.445547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.445892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.445913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.651 [2024-12-16 22:20:45.445924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:24:39.651 [2024-12-16 22:20:45.445932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.446036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.446045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.651 [2024-12-16 22:20:45.446055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:39.651 [2024-12-16 22:20:45.446062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.451361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.451490] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.651 [2024-12-16 22:20:45.451511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.274 ms 00:24:39.651 [2024-12-16 22:20:45.451519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.472574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:39.651 [2024-12-16 22:20:45.475748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.475789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:39.651 [2024-12-16 22:20:45.475810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.155 ms 00:24:39.651 [2024-12-16 22:20:45.475825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.537207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.537334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:39.651 [2024-12-16 22:20:45.537379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.320 ms 00:24:39.651 [2024-12-16 22:20:45.537412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.537958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.538010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:39.651 [2024-12-16 22:20:45.538040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:24:39.651 [2024-12-16 22:20:45.538082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.544037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.544074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:39.651 [2024-12-16 22:20:45.544086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.797 ms 00:24:39.651 [2024-12-16 22:20:45.544096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.547399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.547435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:39.651 [2024-12-16 22:20:45.547445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:24:39.651 [2024-12-16 22:20:45.547453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.547743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.547753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:39.651 [2024-12-16 22:20:45.547769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:39.651 [2024-12-16 22:20:45.547780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.581339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.581377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:39.651 [2024-12-16 22:20:45.581389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.531 ms 00:24:39.651 [2024-12-16 22:20:45.581399] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.586029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.586162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:39.651 [2024-12-16 22:20:45.586178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.588 ms 00:24:39.651 [2024-12-16 22:20:45.586187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.589969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.590004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:39.651 [2024-12-16 22:20:45.590012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:24:39.651 [2024-12-16 22:20:45.590021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.594463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.594498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:39.651 [2024-12-16 22:20:45.594507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.411 ms 00:24:39.651 [2024-12-16 22:20:45.594518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.594553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.594570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:39.651 [2024-12-16 22:20:45.594578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:39.651 [2024-12-16 22:20:45.594587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.594646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.651 [2024-12-16 22:20:45.594656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:39.651 [2024-12-16 22:20:45.594664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:39.651 [2024-12-16 22:20:45.594675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.651 [2024-12-16 22:20:45.595496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3109.492 ms, result 0 00:24:39.651 { 00:24:39.651 "name": "ftl0", 00:24:39.651 "uuid": "ee7c4cb9-b627-4ec2-9321-998c9eed227b" 00:24:39.651 } 00:24:39.651 22:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:39.651 22:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:39.651 22:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:39.651 22:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:39.651 22:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:39.910 /dev/nbd0 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:39.910 1+0 records in 00:24:39.910 1+0 records out 00:24:39.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190217 s, 21.5 MB/s 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:39.910 22:20:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:39.910 [2024-12-16 22:20:46.197278] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:39.910 [2024-12-16 22:20:46.197391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92950 ] 00:24:40.170 [2024-12-16 22:20:46.352575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.170 [2024-12-16 22:20:46.372202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.113  [2024-12-16T22:20:48.846Z] Copying: 192/1024 [MB] (192 MBps) [2024-12-16T22:20:49.791Z] Copying: 382/1024 [MB] (190 MBps) [2024-12-16T22:20:50.736Z] Copying: 576/1024 [MB] (193 MBps) [2024-12-16T22:20:51.670Z] Copying: 765/1024 [MB] (189 MBps) [2024-12-16T22:20:51.670Z] Copying: 986/1024 [MB] (220 MBps) [2024-12-16T22:20:51.930Z] Copying: 1024/1024 [MB] (average 198 MBps) 00:24:45.583 00:24:45.583 22:20:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:48.129 22:20:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:48.129 [2024-12-16 22:20:53.992997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
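
The trace above is the data-priming half of the FTL dirty-shutdown test: ftl0 is exported as a kernel block device over NBD, a 1 GiB reference file (262144 x 4 KiB blocks) is generated from /dev/urandom and checksummed with md5sum, and that file is then written onto the FTL device with O_DIRECT so the contents can be verified after the crash/recovery cycle. A minimal sketch of the same sequence, assuming a running SPDK target with ftl0 already created and paths given relative to the spdk repo rather than the absolute /home/vagrant/spdk_repo paths in the log:

    # Export the FTL bdev as /dev/nbd0 (the nbd kernel module must be loaded).
    modprobe nbd
    scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0

    # Build a 1 GiB reference file and record its checksum for later comparison.
    build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
    md5sum test/ftl/testfile

    # Write the reference data onto the FTL device, bypassing the page cache.
    build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
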
00:24:48.129 [2024-12-16 22:20:53.993120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93048 ] 00:24:48.129 [2024-12-16 22:20:54.149647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.129 [2024-12-16 22:20:54.173769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.072  [2024-12-16T22:20:56.357Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-16T22:20:57.291Z] Copying: 34/1024 [MB] (21 MBps) [2024-12-16T22:20:58.669Z] Copying: 57/1024 [MB] (22 MBps) [2024-12-16T22:20:59.606Z] Copying: 85/1024 [MB] (28 MBps) [2024-12-16T22:21:00.542Z] Copying: 113/1024 [MB] (27 MBps) [2024-12-16T22:21:01.476Z] Copying: 142/1024 [MB] (29 MBps) [2024-12-16T22:21:02.483Z] Copying: 171/1024 [MB] (28 MBps) [2024-12-16T22:21:03.427Z] Copying: 186/1024 [MB] (15 MBps) [2024-12-16T22:21:04.370Z] Copying: 206/1024 [MB] (19 MBps) [2024-12-16T22:21:05.304Z] Copying: 221192/1048576 [kB] (9788 kBps) [2024-12-16T22:21:06.677Z] Copying: 227/1024 [MB] (11 MBps) [2024-12-16T22:21:07.243Z] Copying: 241/1024 [MB] (13 MBps) [2024-12-16T22:21:08.617Z] Copying: 258/1024 [MB] (16 MBps) [2024-12-16T22:21:09.551Z] Copying: 275/1024 [MB] (16 MBps) [2024-12-16T22:21:10.484Z] Copying: 289/1024 [MB] (14 MBps) [2024-12-16T22:21:11.418Z] Copying: 307/1024 [MB] (17 MBps) [2024-12-16T22:21:12.353Z] Copying: 320/1024 [MB] (12 MBps) [2024-12-16T22:21:13.287Z] Copying: 333/1024 [MB] (13 MBps) [2024-12-16T22:21:14.661Z] Copying: 354/1024 [MB] (20 MBps) [2024-12-16T22:21:15.606Z] Copying: 370/1024 [MB] (16 MBps) [2024-12-16T22:21:16.540Z] Copying: 388/1024 [MB] (18 MBps) [2024-12-16T22:21:17.474Z] Copying: 407/1024 [MB] (18 MBps) [2024-12-16T22:21:18.409Z] Copying: 426/1024 [MB] (18 MBps) [2024-12-16T22:21:19.344Z] Copying: 442/1024 [MB] (15 MBps) [2024-12-16T22:21:20.277Z] Copying: 460/1024 [MB] (18 MBps) [2024-12-16T22:21:21.649Z] Copying: 476/1024 [MB] (15 MBps) [2024-12-16T22:21:22.584Z] Copying: 491/1024 [MB] (15 MBps) [2024-12-16T22:21:23.517Z] Copying: 510/1024 [MB] (18 MBps) [2024-12-16T22:21:24.451Z] Copying: 530/1024 [MB] (20 MBps) [2024-12-16T22:21:25.385Z] Copying: 547/1024 [MB] (16 MBps) [2024-12-16T22:21:26.350Z] Copying: 568/1024 [MB] (21 MBps) [2024-12-16T22:21:27.282Z] Copying: 587/1024 [MB] (19 MBps) [2024-12-16T22:21:28.655Z] Copying: 604/1024 [MB] (17 MBps) [2024-12-16T22:21:29.591Z] Copying: 629/1024 [MB] (25 MBps) [2024-12-16T22:21:30.526Z] Copying: 648/1024 [MB] (18 MBps) [2024-12-16T22:21:31.460Z] Copying: 667/1024 [MB] (19 MBps) [2024-12-16T22:21:32.394Z] Copying: 688/1024 [MB] (20 MBps) [2024-12-16T22:21:33.329Z] Copying: 706/1024 [MB] (18 MBps) [2024-12-16T22:21:34.263Z] Copying: 726/1024 [MB] (19 MBps) [2024-12-16T22:21:35.650Z] Copying: 745/1024 [MB] (19 MBps) [2024-12-16T22:21:36.583Z] Copying: 765/1024 [MB] (20 MBps) [2024-12-16T22:21:37.517Z] Copying: 784/1024 [MB] (18 MBps) [2024-12-16T22:21:38.451Z] Copying: 802/1024 [MB] (17 MBps) [2024-12-16T22:21:39.385Z] Copying: 820/1024 [MB] (18 MBps) [2024-12-16T22:21:40.320Z] Copying: 839/1024 [MB] (19 MBps) [2024-12-16T22:21:41.254Z] Copying: 857/1024 [MB] (17 MBps) [2024-12-16T22:21:42.626Z] Copying: 874/1024 [MB] (17 MBps) [2024-12-16T22:21:43.558Z] Copying: 894/1024 [MB] (20 MBps) [2024-12-16T22:21:44.492Z] Copying: 907/1024 [MB] (13 MBps) [2024-12-16T22:21:45.427Z] Copying: 920/1024 [MB] (12 MBps) 
[2024-12-16T22:21:46.360Z] Copying: 936/1024 [MB] (16 MBps) [2024-12-16T22:21:47.295Z] Copying: 953/1024 [MB] (16 MBps) [2024-12-16T22:21:48.258Z] Copying: 972/1024 [MB] (18 MBps) [2024-12-16T22:21:49.634Z] Copying: 999/1024 [MB] (27 MBps) [2024-12-16T22:21:49.634Z] Copying: 1019/1024 [MB] (19 MBps) [2024-12-16T22:21:49.893Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:25:43.546 00:25:43.546 22:21:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:43.546 22:21:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:43.806 22:21:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:44.069 [2024-12-16 22:21:50.225144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.225344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:44.069 [2024-12-16 22:21:50.225373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:44.069 [2024-12-16 22:21:50.225384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.225421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:44.069 [2024-12-16 22:21:50.226172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.226204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:44.069 [2024-12-16 22:21:50.226216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:25:44.069 [2024-12-16 22:21:50.226228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.228693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.228876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:44.069 [2024-12-16 22:21:50.228898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.434 ms 00:25:44.069 [2024-12-16 22:21:50.228913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.246618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.246672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:44.069 [2024-12-16 22:21:50.246688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.680 ms 00:25:44.069 [2024-12-16 22:21:50.246698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.252850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.252896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:44.069 [2024-12-16 22:21:50.252907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:25:44.069 [2024-12-16 22:21:50.252918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.255602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.255658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:44.069 [2024-12-16 22:21:50.255669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:25:44.069 [2024-12-16 22:21:50.255678] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.262040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.262095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:44.069 [2024-12-16 22:21:50.262107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.317 ms 00:25:44.069 [2024-12-16 22:21:50.262119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.262255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.262269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:44.069 [2024-12-16 22:21:50.262279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:44.069 [2024-12-16 22:21:50.262293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.265663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.265713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:44.069 [2024-12-16 22:21:50.265723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:25:44.069 [2024-12-16 22:21:50.265731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.268173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.268216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:44.069 [2024-12-16 22:21:50.268225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:25:44.069 [2024-12-16 22:21:50.268234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.270693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.270742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:44.069 [2024-12-16 22:21:50.270753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.413 ms 00:25:44.069 [2024-12-16 22:21:50.270762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.273020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.069 [2024-12-16 22:21:50.273077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:44.069 [2024-12-16 22:21:50.273087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:25:44.069 [2024-12-16 22:21:50.273097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.069 [2024-12-16 22:21:50.273138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:44.069 [2024-12-16 22:21:50.273157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:44.069 [2024-12-16 22:21:50.273502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 
22:21:50.273637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:25:44.070 [2024-12-16 22:21:50.273867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.273996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.274005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.274012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.274022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.274030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.274039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.274047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:44.070 [2024-12-16 22:21:50.274065] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:44.070 [2024-12-16 22:21:50.274080] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee7c4cb9-b627-4ec2-9321-998c9eed227b 00:25:44.070 [2024-12-16 22:21:50.274094] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:44.070 [2024-12-16 22:21:50.274101] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:44.070 [2024-12-16 22:21:50.274111] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:44.070 [2024-12-16 22:21:50.274119] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:44.070 [2024-12-16 22:21:50.274128] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:44.070 [2024-12-16 22:21:50.274135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:44.070 [2024-12-16 22:21:50.274146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:44.070 [2024-12-16 22:21:50.274154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:44.070 [2024-12-16 22:21:50.274162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:44.070 [2024-12-16 22:21:50.274169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.070 [2024-12-16 22:21:50.274178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:44.070 [2024-12-16 22:21:50.274192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:25:44.070 [2024-12-16 22:21:50.274202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.070 [2024-12-16 22:21:50.276485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.070 [2024-12-16 22:21:50.276650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:44.070 [2024-12-16 22:21:50.276668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:25:44.070 [2024-12-16 22:21:50.276679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.070 [2024-12-16 22:21:50.276816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.070 [2024-12-16 22:21:50.276830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:44.070 [2024-12-16 22:21:50.276864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:44.070 [2024-12-16 22:21:50.276873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.070 [2024-12-16 22:21:50.284694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.070 [2024-12-16 22:21:50.284731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.070 [2024-12-16 22:21:50.284741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.070 [2024-12-16 22:21:50.284752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.070 [2024-12-16 22:21:50.284814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.070 [2024-12-16 22:21:50.284829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.070 [2024-12-16 22:21:50.284859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.070 [2024-12-16 22:21:50.284869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.070 [2024-12-16 22:21:50.284949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.070 [2024-12-16 22:21:50.284964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.070 [2024-12-16 22:21:50.284973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.070 [2024-12-16 22:21:50.284982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.070 [2024-12-16 22:21:50.285000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.070 
[2024-12-16 22:21:50.285010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.070 [2024-12-16 22:21:50.285025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.070 [2024-12-16 22:21:50.285034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.300155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.300219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.071 [2024-12-16 22:21:50.300231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.300241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.312257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:44.071 [2024-12-16 22:21:50.312272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.312282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.312384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:44.071 [2024-12-16 22:21:50.312392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.312403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.312464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:44.071 [2024-12-16 22:21:50.312473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.312485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.312574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:44.071 [2024-12-16 22:21:50.312582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.312593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.312639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:44.071 [2024-12-16 22:21:50.312647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.312657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.312716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:44.071 [2024-12-16 22:21:50.312723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.312733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312780] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.071 [2024-12-16 22:21:50.312792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:44.071 [2024-12-16 22:21:50.312801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.071 [2024-12-16 22:21:50.312815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.071 [2024-12-16 22:21:50.312985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.795 ms, result 0 00:25:44.071 true 00:25:44.071 22:21:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92819 00:25:44.071 22:21:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92819 00:25:44.071 22:21:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:44.071 [2024-12-16 22:21:50.411077] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:25:44.071 [2024-12-16 22:21:50.411227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93655 ] 00:25:44.332 [2024-12-16 22:21:50.573455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.332 [2024-12-16 22:21:50.601788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.718  [2024-12-16T22:21:53.007Z] Copying: 192/1024 [MB] (192 MBps) [2024-12-16T22:21:53.951Z] Copying: 453/1024 [MB] (260 MBps) [2024-12-16T22:21:54.895Z] Copying: 713/1024 [MB] (259 MBps) [2024-12-16T22:21:54.895Z] Copying: 972/1024 [MB] (259 MBps) [2024-12-16T22:21:55.157Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:25:48.810 00:25:48.810 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92819 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:48.810 22:21:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:48.810 [2024-12-16 22:21:55.086292] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
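
At this point the test injects the "dirty" shutdown: the spdk_tgt process (pid 92819 in this run) is killed with SIGKILL so no further cleanup can run, its stale /dev/shm trace file is removed, and a second 1 GiB reference file is generated. The spdk_dd invocation at script line 88 then reopens ftl0 as a standalone application from the JSON config assembled earlier via save_subsystem_config, and writes the second half of the device; the bdev_open_ext retries and blobstore recovery notices that follow are that fresh process re-attaching to the cache and base bdevs. A minimal sketch, where $spdk_tgt_pid is a placeholder for the target pid (92819 here):

    # Kill the target hard; nothing shuts down in an orderly way after this.
    kill -9 "$spdk_tgt_pid"
    rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"

    # Generate the second reference file.
    build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144

    # Reopen ftl0 from the saved bdev config and write the second half of
    # the device (--seek=262144 skips the region written before the kill).
    build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=test/ftl/config/ftl.json
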
00:25:48.810 [2024-12-16 22:21:55.086438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93708 ] 00:25:49.070 [2024-12-16 22:21:55.240578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.070 [2024-12-16 22:21:55.258671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.070 [2024-12-16 22:21:55.341219] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.070 [2024-12-16 22:21:55.341278] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:49.070 [2024-12-16 22:21:55.402964] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:49.070 [2024-12-16 22:21:55.403267] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:49.070 [2024-12-16 22:21:55.403428] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:49.332 [2024-12-16 22:21:55.585477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.585516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:49.332 [2024-12-16 22:21:55.585526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:49.332 [2024-12-16 22:21:55.585532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.332 [2024-12-16 22:21:55.585568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.585576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:49.332 [2024-12-16 22:21:55.585582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:49.332 [2024-12-16 22:21:55.585587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.332 [2024-12-16 22:21:55.585600] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:49.332 [2024-12-16 22:21:55.585861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:49.332 [2024-12-16 22:21:55.585878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.585888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:49.332 [2024-12-16 22:21:55.585899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:25:49.332 [2024-12-16 22:21:55.585905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.332 [2024-12-16 22:21:55.586858] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:49.332 [2024-12-16 22:21:55.588597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.588626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:49.332 [2024-12-16 22:21:55.588638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:25:49.332 [2024-12-16 22:21:55.588647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.332 [2024-12-16 22:21:55.588695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.588705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:49.332 [2024-12-16 22:21:55.588713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:49.332 [2024-12-16 22:21:55.588721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.332 [2024-12-16 22:21:55.592961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.592987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:49.332 [2024-12-16 22:21:55.592995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:25:49.332 [2024-12-16 22:21:55.593001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.332 [2024-12-16 22:21:55.593063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.593070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:49.332 [2024-12-16 22:21:55.593078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:49.332 [2024-12-16 22:21:55.593086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.332 [2024-12-16 22:21:55.593119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.332 [2024-12-16 22:21:55.593129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:49.333 [2024-12-16 22:21:55.593138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:49.333 [2024-12-16 22:21:55.593143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.333 [2024-12-16 22:21:55.593158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:49.333 [2024-12-16 22:21:55.594272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.333 [2024-12-16 22:21:55.594294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:49.333 [2024-12-16 22:21:55.594301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:25:49.333 [2024-12-16 22:21:55.594317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.333 [2024-12-16 22:21:55.594340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.333 [2024-12-16 22:21:55.594347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:49.333 [2024-12-16 22:21:55.594356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:49.333 [2024-12-16 22:21:55.594361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.333 [2024-12-16 22:21:55.594375] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:49.333 [2024-12-16 22:21:55.594390] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:49.333 [2024-12-16 22:21:55.594417] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:49.333 [2024-12-16 22:21:55.594430] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:49.333 [2024-12-16 22:21:55.594509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:49.333 [2024-12-16 22:21:55.594521] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:49.333 
[2024-12-16 22:21:55.594529] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:49.333 [2024-12-16 22:21:55.594537] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594546] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:49.333 [2024-12-16 22:21:55.594558] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:49.333 [2024-12-16 22:21:55.594564] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:49.333 [2024-12-16 22:21:55.594572] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:49.333 [2024-12-16 22:21:55.594579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.333 [2024-12-16 22:21:55.594585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:49.333 [2024-12-16 22:21:55.594591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:25:49.333 [2024-12-16 22:21:55.594596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.333 [2024-12-16 22:21:55.594661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.333 [2024-12-16 22:21:55.594669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:49.333 [2024-12-16 22:21:55.594675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:49.333 [2024-12-16 22:21:55.594687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.333 [2024-12-16 22:21:55.594763] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:49.333 [2024-12-16 22:21:55.594771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:49.333 [2024-12-16 22:21:55.594777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:49.333 [2024-12-16 22:21:55.594794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:49.333 [2024-12-16 22:21:55.594809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.333 [2024-12-16 22:21:55.594819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:49.333 [2024-12-16 22:21:55.594827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:49.333 [2024-12-16 22:21:55.594832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:49.333 [2024-12-16 22:21:55.594854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:49.333 [2024-12-16 22:21:55.594860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:49.333 [2024-12-16 22:21:55.594866] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:49.333 [2024-12-16 22:21:55.594876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:49.333 [2024-12-16 22:21:55.594891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:49.333 [2024-12-16 22:21:55.594906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:49.333 [2024-12-16 22:21:55.594921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:49.333 [2024-12-16 22:21:55.594944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:49.333 [2024-12-16 22:21:55.594956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:49.333 [2024-12-16 22:21:55.594961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:49.333 [2024-12-16 22:21:55.594967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.333 [2024-12-16 22:21:55.594973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:49.333 [2024-12-16 22:21:55.594978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:49.333 [2024-12-16 22:21:55.594983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:49.333 [2024-12-16 22:21:55.594989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:49.333 [2024-12-16 22:21:55.594995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:49.333 [2024-12-16 22:21:55.595000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.333 [2024-12-16 22:21:55.595006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:49.333 [2024-12-16 22:21:55.595011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:49.333 [2024-12-16 22:21:55.595017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.333 [2024-12-16 22:21:55.595025] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:49.333 [2024-12-16 22:21:55.595034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:49.333 [2024-12-16 22:21:55.595042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:49.333 [2024-12-16 22:21:55.595048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:49.333 [2024-12-16 
22:21:55.595054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:49.333 [2024-12-16 22:21:55.595060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:49.333 [2024-12-16 22:21:55.595066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:49.333 [2024-12-16 22:21:55.595071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:49.333 [2024-12-16 22:21:55.595077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:49.333 [2024-12-16 22:21:55.595083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:49.333 [2024-12-16 22:21:55.595090] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:49.333 [2024-12-16 22:21:55.595098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.333 [2024-12-16 22:21:55.595106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:49.333 [2024-12-16 22:21:55.595112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:49.333 [2024-12-16 22:21:55.595118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:49.333 [2024-12-16 22:21:55.595124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:49.333 [2024-12-16 22:21:55.595130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:49.333 [2024-12-16 22:21:55.595136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:49.333 [2024-12-16 22:21:55.595144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:49.333 [2024-12-16 22:21:55.595151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:49.333 [2024-12-16 22:21:55.595157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:49.333 [2024-12-16 22:21:55.595163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:49.333 [2024-12-16 22:21:55.595169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:49.333 [2024-12-16 22:21:55.595175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:49.333 [2024-12-16 22:21:55.595181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:49.333 [2024-12-16 22:21:55.595187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:49.333 [2024-12-16 22:21:55.595193] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:49.334 [2024-12-16 22:21:55.595204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:49.334 [2024-12-16 22:21:55.595210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:49.334 [2024-12-16 22:21:55.595217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:49.334 [2024-12-16 22:21:55.595223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:49.334 [2024-12-16 22:21:55.595229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:49.334 [2024-12-16 22:21:55.595237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.595244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:49.334 [2024-12-16 22:21:55.595254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:25:49.334 [2024-12-16 22:21:55.595261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.602832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.602870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:49.334 [2024-12-16 22:21:55.602877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.538 ms 00:25:49.334 [2024-12-16 22:21:55.602886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.602949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.602955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:49.334 [2024-12-16 22:21:55.602964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:49.334 [2024-12-16 22:21:55.602969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.621263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.621302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:49.334 [2024-12-16 22:21:55.621314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.260 ms 00:25:49.334 [2024-12-16 22:21:55.621322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.621355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.621365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:49.334 [2024-12-16 22:21:55.621373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:49.334 [2024-12-16 22:21:55.621379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.621724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.621752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:49.334 [2024-12-16 22:21:55.621762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:49.334 [2024-12-16 22:21:55.621769] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.621896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.621916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:49.334 [2024-12-16 22:21:55.621925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:25:49.334 [2024-12-16 22:21:55.621933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.627141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.627177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:49.334 [2024-12-16 22:21:55.627188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:25:49.334 [2024-12-16 22:21:55.627197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.629423] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:49.334 [2024-12-16 22:21:55.629461] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:49.334 [2024-12-16 22:21:55.629474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.629486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:49.334 [2024-12-16 22:21:55.629496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:25:49.334 [2024-12-16 22:21:55.629505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.642059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.642086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:49.334 [2024-12-16 22:21:55.642096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.468 ms 00:25:49.334 [2024-12-16 22:21:55.642102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.643568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.643594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:49.334 [2024-12-16 22:21:55.643601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:25:49.334 [2024-12-16 22:21:55.643606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.644737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.644762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:49.334 [2024-12-16 22:21:55.644769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:25:49.334 [2024-12-16 22:21:55.644774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.645027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.645046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:49.334 [2024-12-16 22:21:55.645054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:25:49.334 [2024-12-16 22:21:55.645059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 
[2024-12-16 22:21:55.658351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.658388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:49.334 [2024-12-16 22:21:55.658397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.280 ms 00:25:49.334 [2024-12-16 22:21:55.658404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.664199] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:49.334 [2024-12-16 22:21:55.666046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.666071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:49.334 [2024-12-16 22:21:55.666080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.613 ms 00:25:49.334 [2024-12-16 22:21:55.666086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.666127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.666136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:49.334 [2024-12-16 22:21:55.666147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:49.334 [2024-12-16 22:21:55.666157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.666206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.666217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:49.334 [2024-12-16 22:21:55.666224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:49.334 [2024-12-16 22:21:55.666233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.666247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.666255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:49.334 [2024-12-16 22:21:55.666261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:49.334 [2024-12-16 22:21:55.666269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.666295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:49.334 [2024-12-16 22:21:55.666302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.666309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:49.334 [2024-12-16 22:21:55.666322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:49.334 [2024-12-16 22:21:55.666328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.669137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.669165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:49.334 [2024-12-16 22:21:55.669172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.795 ms 00:25:49.334 [2024-12-16 22:21:55.669179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.669231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.334 [2024-12-16 22:21:55.669239] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:49.334 [2024-12-16 22:21:55.669245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:49.334 [2024-12-16 22:21:55.669251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.334 [2024-12-16 22:21:55.670007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.221 ms, result 0 00:25:50.719 [2024-12-16T22:22:56.298Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-12-16 22:22:56.042692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.951 [2024-12-16 22:22:56.042778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:49.951 [2024-12-16 22:22:56.042796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:49.951 [2024-12-16 22:22:56.042806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.951 [2024-12-16 22:22:56.046565] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:49.951 [2024-12-16 22:22:56.048505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.951 [2024-12-16 22:22:56.048689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:49.951 [2024-12-16 22:22:56.048851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:26:49.951 [2024-12-16 22:22:56.048884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.062757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.062962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:49.952 [2024-12-16 22:22:56.063052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.766 ms 00:26:49.952 [2024-12-16 22:22:56.063085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.087876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.088056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:49.952 [2024-12-16 22:22:56.088128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.732 ms 00:26:49.952 [2024-12-16 22:22:56.088153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.094273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.094443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:49.952 [2024-12-16 22:22:56.094464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.061 ms 00:26:49.952 [2024-12-16 22:22:56.094473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.097435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.097492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:49.952 [2024-12-16 22:22:56.097503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.908 ms 00:26:49.952 [2024-12-16 22:22:56.097512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.102792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:26:49.952 [2024-12-16 22:22:56.102864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:49.952 [2024-12-16 22:22:56.102890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.233 ms 00:26:49.952 [2024-12-16 22:22:56.102899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.241846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.241922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:49.952 [2024-12-16 22:22:56.241935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 138.878 ms 00:26:49.952 [2024-12-16 22:22:56.241944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.244688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.244743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:49.952 [2024-12-16 22:22:56.244754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:26:49.952 [2024-12-16 22:22:56.244762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.247197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.247252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:49.952 [2024-12-16 22:22:56.247262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.389 ms 00:26:49.952 [2024-12-16 22:22:56.247269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.249524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.249576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:49.952 [2024-12-16 22:22:56.249586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:26:49.952 [2024-12-16 22:22:56.249593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.251945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.952 [2024-12-16 22:22:56.251995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:49.952 [2024-12-16 22:22:56.252006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:26:49.952 [2024-12-16 22:22:56.252013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.952 [2024-12-16 22:22:56.252055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:49.952 [2024-12-16 22:22:56.252079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105216 / 261120 wr_cnt: 1 state: open 00:26:49.952 [2024-12-16 22:22:56.252094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252328] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252521] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:49.952 [2024-12-16 22:22:56.252601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 
22:22:56.252722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:49.953 [2024-12-16 22:22:56.252907] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:49.953 [2024-12-16 22:22:56.252923] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee7c4cb9-b627-4ec2-9321-998c9eed227b 00:26:49.953 [2024-12-16 22:22:56.252932] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105216 00:26:49.953 [2024-12-16 22:22:56.252940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106176 00:26:49.953 [2024-12-16 22:22:56.252955] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105216 00:26:49.953 [2024-12-16 22:22:56.252971] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] WAF: 1.0091 00:26:49.953 [2024-12-16 22:22:56.252983] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:49.953 [2024-12-16 22:22:56.252992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:49.953 [2024-12-16 22:22:56.253001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:49.953 [2024-12-16 22:22:56.253008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:49.953 [2024-12-16 22:22:56.253016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:49.953 [2024-12-16 22:22:56.253023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.953 [2024-12-16 22:22:56.253033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:49.953 [2024-12-16 22:22:56.253044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:26:49.953 [2024-12-16 22:22:56.253051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.255552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.953 [2024-12-16 22:22:56.255596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:49.953 [2024-12-16 22:22:56.255607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:26:49.953 [2024-12-16 22:22:56.255617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.255743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.953 [2024-12-16 22:22:56.255753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:49.953 [2024-12-16 22:22:56.255768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:49.953 [2024-12-16 22:22:56.255776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.263642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.263708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:49.953 [2024-12-16 22:22:56.263719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.263727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.263795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.263804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:49.953 [2024-12-16 22:22:56.263813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.263826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.263912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.263923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:49.953 [2024-12-16 22:22:56.263932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.263941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.263956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.263967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:49.953 [2024-12-16 22:22:56.263975] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.263982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.278913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.278952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:49.953 [2024-12-16 22:22:56.278964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.278972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.289875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.289921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:49.953 [2024-12-16 22:22:56.289934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.289942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.289992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.290001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:49.953 [2024-12-16 22:22:56.290010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.290019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.290059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.290068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:49.953 [2024-12-16 22:22:56.290083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.290091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.290170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.290181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:49.953 [2024-12-16 22:22:56.290190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.290197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.290226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.290236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:49.953 [2024-12-16 22:22:56.290245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.290256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.290297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.290306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:49.953 [2024-12-16 22:22:56.290315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.290323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.953 [2024-12-16 22:22:56.290369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.953 [2024-12-16 22:22:56.290412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Open base bdev 00:26:49.953 [2024-12-16 22:22:56.290429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.953 [2024-12-16 22:22:56.290437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.954 [2024-12-16 22:22:56.290582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 249.023 ms, result 0 00:26:50.963 00:26:50.963 00:26:50.963 22:22:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:53.512 22:22:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:53.512 [2024-12-16 22:22:59.311797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:26:53.512 [2024-12-16 22:22:59.311975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94360 ] 00:26:53.512 [2024-12-16 22:22:59.476311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.512 [2024-12-16 22:22:59.505123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.512 [2024-12-16 22:22:59.616821] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.512 [2024-12-16 22:22:59.616931] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.512 [2024-12-16 22:22:59.778408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.778473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:53.512 [2024-12-16 22:22:59.778496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:53.512 [2024-12-16 22:22:59.778509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.778569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.778580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:53.512 [2024-12-16 22:22:59.778589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:53.512 [2024-12-16 22:22:59.778597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.778623] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:53.512 [2024-12-16 22:22:59.778997] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:53.512 [2024-12-16 22:22:59.779029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.779038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:53.512 [2024-12-16 22:22:59.779054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:26:53.512 [2024-12-16 22:22:59.779062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.780935] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:53.512 [2024-12-16 22:22:59.784750] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.784803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:53.512 [2024-12-16 22:22:59.784821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.821 ms 00:26:53.512 [2024-12-16 22:22:59.784833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.784923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.784939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:53.512 [2024-12-16 22:22:59.784947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:53.512 [2024-12-16 22:22:59.784955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.793176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.793231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.512 [2024-12-16 22:22:59.793246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.176 ms 00:26:53.512 [2024-12-16 22:22:59.793254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.793357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.793367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.512 [2024-12-16 22:22:59.793378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:53.512 [2024-12-16 22:22:59.793386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.793443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.793453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:53.512 [2024-12-16 22:22:59.793466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:53.512 [2024-12-16 22:22:59.793477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.793504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:53.512 [2024-12-16 22:22:59.795534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.795579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.512 [2024-12-16 22:22:59.795593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:26:53.512 [2024-12-16 22:22:59.795605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.795641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.795649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:53.512 [2024-12-16 22:22:59.795658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:53.512 [2024-12-16 22:22:59.795668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.795690] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:53.512 [2024-12-16 22:22:59.795713] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:53.512 
[2024-12-16 22:22:59.795749] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:53.512 [2024-12-16 22:22:59.795766] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:53.512 [2024-12-16 22:22:59.795893] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:53.512 [2024-12-16 22:22:59.795905] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:53.512 [2024-12-16 22:22:59.795918] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:53.512 [2024-12-16 22:22:59.795931] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:53.512 [2024-12-16 22:22:59.795943] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:53.512 [2024-12-16 22:22:59.795951] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:53.512 [2024-12-16 22:22:59.795960] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:53.512 [2024-12-16 22:22:59.795971] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:53.512 [2024-12-16 22:22:59.795979] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:53.512 [2024-12-16 22:22:59.795987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.795995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:53.512 [2024-12-16 22:22:59.796002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:26:53.512 [2024-12-16 22:22:59.796009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.796094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.512 [2024-12-16 22:22:59.796103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:53.512 [2024-12-16 22:22:59.796110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:53.512 [2024-12-16 22:22:59.796117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.512 [2024-12-16 22:22:59.796214] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:53.512 [2024-12-16 22:22:59.796234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:53.512 [2024-12-16 22:22:59.796252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.512 [2024-12-16 22:22:59.796267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.512 [2024-12-16 22:22:59.796276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:53.512 [2024-12-16 22:22:59.796284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:53.512 [2024-12-16 22:22:59.796293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:53.512 [2024-12-16 22:22:59.796301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:53.512 [2024-12-16 22:22:59.796310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:53.512 [2024-12-16 22:22:59.796318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.512 
[2024-12-16 22:22:59.796326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:53.512 [2024-12-16 22:22:59.796334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:53.512 [2024-12-16 22:22:59.796344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.512 [2024-12-16 22:22:59.796352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:53.512 [2024-12-16 22:22:59.796360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:53.512 [2024-12-16 22:22:59.796368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:53.513 [2024-12-16 22:22:59.796383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:53.513 [2024-12-16 22:22:59.796393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:53.513 [2024-12-16 22:22:59.796409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.513 [2024-12-16 22:22:59.796425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:53.513 [2024-12-16 22:22:59.796432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.513 [2024-12-16 22:22:59.796448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:53.513 [2024-12-16 22:22:59.796456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.513 [2024-12-16 22:22:59.796471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:53.513 [2024-12-16 22:22:59.796479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.513 [2024-12-16 22:22:59.796494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:53.513 [2024-12-16 22:22:59.796502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.513 [2024-12-16 22:22:59.796520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:53.513 [2024-12-16 22:22:59.796527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:53.513 [2024-12-16 22:22:59.796534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.513 [2024-12-16 22:22:59.796542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:53.513 [2024-12-16 22:22:59.796550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:53.513 [2024-12-16 22:22:59.796557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:53.513 [2024-12-16 22:22:59.796571] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.75 MiB 00:26:53.513 [2024-12-16 22:22:59.796578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796586] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:53.513 [2024-12-16 22:22:59.796602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:53.513 [2024-12-16 22:22:59.796611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.513 [2024-12-16 22:22:59.796619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.513 [2024-12-16 22:22:59.796628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:53.513 [2024-12-16 22:22:59.796636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:53.513 [2024-12-16 22:22:59.796644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:53.513 [2024-12-16 22:22:59.796654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:53.513 [2024-12-16 22:22:59.796661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:53.513 [2024-12-16 22:22:59.796669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:53.513 [2024-12-16 22:22:59.796677] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:53.513 [2024-12-16 22:22:59.796687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.513 [2024-12-16 22:22:59.796696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:53.513 [2024-12-16 22:22:59.796703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:53.513 [2024-12-16 22:22:59.796711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:53.513 [2024-12-16 22:22:59.796717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:53.513 [2024-12-16 22:22:59.796724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:53.513 [2024-12-16 22:22:59.796731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:53.513 [2024-12-16 22:22:59.796738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:53.513 [2024-12-16 22:22:59.796745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:53.513 [2024-12-16 22:22:59.796752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:53.513 [2024-12-16 22:22:59.796758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:53.513 [2024-12-16 22:22:59.796765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:53.513 [2024-12-16 22:22:59.796774] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:53.513 [2024-12-16 22:22:59.796781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:53.513 [2024-12-16 22:22:59.796788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:53.513 [2024-12-16 22:22:59.796796] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:53.513 [2024-12-16 22:22:59.796804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.513 [2024-12-16 22:22:59.796812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:53.513 [2024-12-16 22:22:59.796819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:53.513 [2024-12-16 22:22:59.796826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:53.513 [2024-12-16 22:22:59.796833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:53.513 [2024-12-16 22:22:59.796860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.796869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:53.513 [2024-12-16 22:22:59.796877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:26:53.513 [2024-12-16 22:22:59.796888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.810105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.810165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.513 [2024-12-16 22:22:59.810180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.172 ms 00:26:53.513 [2024-12-16 22:22:59.810188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.810268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.810277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:53.513 [2024-12-16 22:22:59.810286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:53.513 [2024-12-16 22:22:59.810298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.827655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.827712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.513 [2024-12-16 22:22:59.827732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 00:26:53.513 [2024-12-16 22:22:59.827741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.827788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.827798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:26:53.513 [2024-12-16 22:22:59.827812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:53.513 [2024-12-16 22:22:59.827824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.828367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.828408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.513 [2024-12-16 22:22:59.828419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:26:53.513 [2024-12-16 22:22:59.828428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.828579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.828593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.513 [2024-12-16 22:22:59.828606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:26:53.513 [2024-12-16 22:22:59.828618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.835772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.835816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.513 [2024-12-16 22:22:59.835833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.128 ms 00:26:53.513 [2024-12-16 22:22:59.835874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.839360] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:53.513 [2024-12-16 22:22:59.839406] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:53.513 [2024-12-16 22:22:59.839422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.839430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:53.513 [2024-12-16 22:22:59.839439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.462 ms 00:26:53.513 [2024-12-16 22:22:59.839446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.854969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.855019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:53.513 [2024-12-16 22:22:59.855040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.474 ms 00:26:53.513 [2024-12-16 22:22:59.855049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.513 [2024-12-16 22:22:59.857619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.513 [2024-12-16 22:22:59.857670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:53.513 [2024-12-16 22:22:59.857680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.521 ms 00:26:53.514 [2024-12-16 22:22:59.857687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.775 [2024-12-16 22:22:59.860291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.775 [2024-12-16 22:22:59.860340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:53.775 [2024-12-16 22:22:59.860351] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:26:53.775 [2024-12-16 22:22:59.860358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.775 [2024-12-16 22:22:59.860700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.775 [2024-12-16 22:22:59.860734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:53.775 [2024-12-16 22:22:59.860747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:26:53.775 [2024-12-16 22:22:59.860760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.775 [2024-12-16 22:22:59.883047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.775 [2024-12-16 22:22:59.883110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:53.775 [2024-12-16 22:22:59.883122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.255 ms 00:26:53.775 [2024-12-16 22:22:59.883131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.775 [2024-12-16 22:22:59.891052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:53.775 [2024-12-16 22:22:59.893874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.775 [2024-12-16 22:22:59.893910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:53.775 [2024-12-16 22:22:59.893921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.696 ms 00:26:53.776 [2024-12-16 22:22:59.893930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.776 [2024-12-16 22:22:59.894002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.776 [2024-12-16 22:22:59.894017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:53.776 [2024-12-16 22:22:59.894030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:53.776 [2024-12-16 22:22:59.894038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.776 [2024-12-16 22:22:59.895724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.776 [2024-12-16 22:22:59.895772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:53.776 [2024-12-16 22:22:59.895788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:26:53.776 [2024-12-16 22:22:59.895796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.776 [2024-12-16 22:22:59.895822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.776 [2024-12-16 22:22:59.895847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:53.776 [2024-12-16 22:22:59.895856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:53.776 [2024-12-16 22:22:59.895864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.776 [2024-12-16 22:22:59.895902] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:53.776 [2024-12-16 22:22:59.895915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.776 [2024-12-16 22:22:59.895924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:53.776 [2024-12-16 22:22:59.895937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:53.776 [2024-12-16 22:22:59.895945] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.776 [2024-12-16 22:22:59.901031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.776 [2024-12-16 22:22:59.901075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:53.776 [2024-12-16 22:22:59.901085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.068 ms 00:26:53.776 [2024-12-16 22:22:59.901093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.776 [2024-12-16 22:22:59.901178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.776 [2024-12-16 22:22:59.901187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:53.776 [2024-12-16 22:22:59.901201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:53.776 [2024-12-16 22:22:59.901216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.776 [2024-12-16 22:22:59.902258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.425 ms, result 0 00:26:55.161 [2024-12-16T22:23:02.451Z] Copying: 1000/1048576 [kB] (1000 kBps) [… 37 intermediate spdk_dd copy-progress updates omitted …] [2024-12-16T22:23:38.623Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-16 22:23:38.507219] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.276 [2024-12-16 22:23:38.507285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:32.276 [2024-12-16 22:23:38.507299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:32.276 [2024-12-16 22:23:38.507308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.276 [2024-12-16 22:23:38.507329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:32.276 [2024-12-16 22:23:38.507928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.276 [2024-12-16 22:23:38.507961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:32.276 [2024-12-16 22:23:38.507970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:27:32.276 [2024-12-16 22:23:38.507979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.276 [2024-12-16 22:23:38.508195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.276 [2024-12-16 22:23:38.508212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:32.276 [2024-12-16 22:23:38.508222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:27:32.276 [2024-12-16 22:23:38.508230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.276 [2024-12-16 22:23:38.519103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.276 [2024-12-16 22:23:38.519137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:32.276 [2024-12-16 22:23:38.519150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.856 ms 00:27:32.277 [2024-12-16 22:23:38.519156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.524193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.524223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:32.277 [2024-12-16 22:23:38.524231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.012 ms 00:27:32.277 [2024-12-16 22:23:38.524239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.525271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.525305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:32.277 [2024-12-16 22:23:38.525312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:27:32.277 [2024-12-16 22:23:38.525318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.528626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.528664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:32.277 [2024-12-16 22:23:38.528671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:27:32.277 [2024-12-16 22:23:38.528677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.530429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.530469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:32.277 [2024-12-16 22:23:38.530477] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:27:32.277 [2024-12-16 22:23:38.530491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.532211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.532241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:32.277 [2024-12-16 22:23:38.532248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:27:32.277 [2024-12-16 22:23:38.532254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.533416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.533447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:32.277 [2024-12-16 22:23:38.533454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:27:32.277 [2024-12-16 22:23:38.533460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.534879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.534909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:32.277 [2024-12-16 22:23:38.534916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:27:32.277 [2024-12-16 22:23:38.534921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.535850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.277 [2024-12-16 22:23:38.535881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:32.277 [2024-12-16 22:23:38.535888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:27:32.277 [2024-12-16 22:23:38.535894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.277 [2024-12-16 22:23:38.535917] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:32.277 [2024-12-16 22:23:38.535934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:32.277 [2024-12-16 22:23:38.535942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:32.277 [2024-12-16 22:23:38.535949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.535996] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 
[2024-12-16 22:23:38.536143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:27:32.277 [2024-12-16 22:23:38.536290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:32.277 [2024-12-16 22:23:38.536302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:32.278 [2024-12-16 22:23:38.536543] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:32.278 [2024-12-16 22:23:38.536553] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee7c4cb9-b627-4ec2-9321-998c9eed227b 00:27:32.278 [2024-12-16 22:23:38.536560] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:32.278 [2024-12-16 22:23:38.536566] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 159424 00:27:32.278 [2024-12-16 22:23:38.536571] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 157440 00:27:32.278 [2024-12-16 22:23:38.536580] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:27:32.278 [2024-12-16 22:23:38.536590] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:32.278 [2024-12-16 22:23:38.536596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:32.278 [2024-12-16 22:23:38.536605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:32.278 [2024-12-16 22:23:38.536610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:32.278 [2024-12-16 22:23:38.536619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:32.278 [2024-12-16 22:23:38.536625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:32.278 [2024-12-16 22:23:38.536630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:32.278 [2024-12-16 22:23:38.536636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:27:32.278 [2024-12-16 22:23:38.536642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.537980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.278 [2024-12-16 22:23:38.538003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:32.278 [2024-12-16 22:23:38.538015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:27:32.278 [2024-12-16 22:23:38.538022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.538093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.278 [2024-12-16 22:23:38.538103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:32.278 [2024-12-16 22:23:38.538112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:32.278 [2024-12-16 22:23:38.538117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.542480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.542503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.278 [2024-12-16 22:23:38.542510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.542516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.542558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.542566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.278 [2024-12-16 22:23:38.542573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.542578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.542623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.542630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.278 [2024-12-16 22:23:38.542636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.542642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.542653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.542659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.278 [2024-12-16 22:23:38.542665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.542673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.550446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.550480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.278 [2024-12-16 22:23:38.550488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.550494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 
22:23:38.556846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.556876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.278 [2024-12-16 22:23:38.556888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.556895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.556914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.556920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.278 [2024-12-16 22:23:38.556926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.556932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.556970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.556977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.278 [2024-12-16 22:23:38.556983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.556989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.557045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.557057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.278 [2024-12-16 22:23:38.557063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.557068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.557089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.557099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:32.278 [2024-12-16 22:23:38.557105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.557111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.557146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.278 [2024-12-16 22:23:38.557153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.278 [2024-12-16 22:23:38.557159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.278 [2024-12-16 22:23:38.557165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.278 [2024-12-16 22:23:38.557196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.279 [2024-12-16 22:23:38.557203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.279 [2024-12-16 22:23:38.557208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.279 [2024-12-16 22:23:38.557214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.279 [2024-12-16 22:23:38.557311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.076 ms, result 0 00:27:32.540 00:27:32.540 00:27:32.540 22:23:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:35.087 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 
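The read-back/verify pass recorded above can be reproduced by hand. A minimal bash sketch, assuming the workspace paths and ftl.json from this run: the spdk_dd flags are exactly the ones logged by dirty_shutdown.sh@93-@95, while the $SPDK shorthand and the testfile2.md5 baseline (presumably written alongside the md5sum at @90) are assumptions:

    # Repo root used in this run (hypothetical shorthand for the paths above)
    SPDK=/home/vagrant/spdk_repo/spdk

    # Read the first 262144 blocks back from the FTL bdev after the dirty
    # shutdown, then verify against the md5 captured before the shutdown.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --count=262144 \
        --json="$SPDK/test/ftl/config/ftl.json"
    md5sum -c "$SPDK/test/ftl/testfile.md5"

    # Second half of the data: same count, skipping the first 262144 blocks.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile2" \
        --count=262144 --skip=262144 \
        --json="$SPDK/test/ftl/config/ftl.json"
    md5sum -c "$SPDK/test/ftl/testfile2.md5"   # baseline assumed, cf. @90 above

As a quick consistency check on the statistics dumped during the shutdown above: WAF = total writes / user writes = 159424 / 157440 ≈ 1.0126, matching the reported value.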
00:27:35.087 22:23:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:35.087 [2024-12-16 22:23:40.959901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:27:35.087 [2024-12-16 22:23:40.960026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94786 ] 00:27:35.087 [2024-12-16 22:23:41.120132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.087 [2024-12-16 22:23:41.148496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.087 [2024-12-16 22:23:41.264600] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:35.087 [2024-12-16 22:23:41.264704] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:35.087 [2024-12-16 22:23:41.425601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.087 [2024-12-16 22:23:41.425661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:35.087 [2024-12-16 22:23:41.425680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:35.087 [2024-12-16 22:23:41.425688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.087 [2024-12-16 22:23:41.425742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.087 [2024-12-16 22:23:41.425754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.087 [2024-12-16 22:23:41.425763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:35.087 [2024-12-16 22:23:41.425772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.087 [2024-12-16 22:23:41.425800] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:35.088 [2024-12-16 22:23:41.426094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:35.088 [2024-12-16 22:23:41.426120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.088 [2024-12-16 22:23:41.426133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.088 [2024-12-16 22:23:41.426144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:27:35.088 [2024-12-16 22:23:41.426152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.088 [2024-12-16 22:23:41.428141] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:35.088 [2024-12-16 22:23:41.431752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.088 [2024-12-16 22:23:41.431800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:35.088 [2024-12-16 22:23:41.431818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.614 ms 00:27:35.088 [2024-12-16 22:23:41.431830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.088 [2024-12-16 22:23:41.431918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.088 [2024-12-16 22:23:41.431929] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:35.088 [2024-12-16 22:23:41.431939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:35.088 [2024-12-16 22:23:41.431946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.350 [2024-12-16 22:23:41.439967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.350 [2024-12-16 22:23:41.440011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.350 [2024-12-16 22:23:41.440024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.979 ms 00:27:35.350 [2024-12-16 22:23:41.440032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.350 [2024-12-16 22:23:41.440131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.350 [2024-12-16 22:23:41.440145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.350 [2024-12-16 22:23:41.440157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:35.350 [2024-12-16 22:23:41.440165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.350 [2024-12-16 22:23:41.440223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.350 [2024-12-16 22:23:41.440233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:35.350 [2024-12-16 22:23:41.440241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:35.350 [2024-12-16 22:23:41.440254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.350 [2024-12-16 22:23:41.440277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:35.350 [2024-12-16 22:23:41.442296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.350 [2024-12-16 22:23:41.442330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.350 [2024-12-16 22:23:41.442340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:27:35.350 [2024-12-16 22:23:41.442351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.350 [2024-12-16 22:23:41.442389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.350 [2024-12-16 22:23:41.442401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:35.350 [2024-12-16 22:23:41.442410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:35.350 [2024-12-16 22:23:41.442423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.350 [2024-12-16 22:23:41.442463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:35.350 [2024-12-16 22:23:41.442487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:35.350 [2024-12-16 22:23:41.442524] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:35.350 [2024-12-16 22:23:41.442541] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:35.350 [2024-12-16 22:23:41.442647] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:35.350 [2024-12-16 22:23:41.442659] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob store 0x48 bytes 00:27:35.350 [2024-12-16 22:23:41.442678] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:35.350 [2024-12-16 22:23:41.442690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:35.350 [2024-12-16 22:23:41.442703] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:35.350 [2024-12-16 22:23:41.442711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:35.350 [2024-12-16 22:23:41.442719] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:35.350 [2024-12-16 22:23:41.442727] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:35.351 [2024-12-16 22:23:41.442734] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:35.351 [2024-12-16 22:23:41.442742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.351 [2024-12-16 22:23:41.442751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:35.351 [2024-12-16 22:23:41.442761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:27:35.351 [2024-12-16 22:23:41.442772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.351 [2024-12-16 22:23:41.442875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.351 [2024-12-16 22:23:41.442885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:35.351 [2024-12-16 22:23:41.442892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:27:35.351 [2024-12-16 22:23:41.442900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.351 [2024-12-16 22:23:41.443001] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:35.351 [2024-12-16 22:23:41.443013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:35.351 [2024-12-16 22:23:41.443023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:35.351 [2024-12-16 22:23:41.443059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:35.351 [2024-12-16 22:23:41.443085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:35.351 [2024-12-16 22:23:41.443103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:35.351 [2024-12-16 22:23:41.443111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:35.351 [2024-12-16 22:23:41.443118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:35.351 [2024-12-16 22:23:41.443127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:35.351 [2024-12-16 22:23:41.443136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:35.351 
[2024-12-16 22:23:41.443145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:35.351 [2024-12-16 22:23:41.443162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:35.351 [2024-12-16 22:23:41.443187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:35.351 [2024-12-16 22:23:41.443212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:35.351 [2024-12-16 22:23:41.443241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:35.351 [2024-12-16 22:23:41.443263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:35.351 [2024-12-16 22:23:41.443286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.351 [2024-12-16 22:23:41.443301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:35.351 [2024-12-16 22:23:41.443309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:35.351 [2024-12-16 22:23:41.443317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:35.351 [2024-12-16 22:23:41.443325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:35.351 [2024-12-16 22:23:41.443332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:35.351 [2024-12-16 22:23:41.443339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:35.351 [2024-12-16 22:23:41.443357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:35.351 [2024-12-16 22:23:41.443365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443373] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:35.351 [2024-12-16 22:23:41.443386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:35.351 [2024-12-16 22:23:41.443394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443402] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:35.351 [2024-12-16 22:23:41.443413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:35.351 [2024-12-16 22:23:41.443423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:35.351 [2024-12-16 22:23:41.443431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:35.351 [2024-12-16 22:23:41.443439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:35.351 [2024-12-16 22:23:41.443446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:35.351 [2024-12-16 22:23:41.443454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:35.351 [2024-12-16 22:23:41.443465] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:35.351 [2024-12-16 22:23:41.443481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.351 [2024-12-16 22:23:41.443491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:35.351 [2024-12-16 22:23:41.443498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:35.351 [2024-12-16 22:23:41.443508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:35.351 [2024-12-16 22:23:41.443515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:35.351 [2024-12-16 22:23:41.443523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:35.351 [2024-12-16 22:23:41.443530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:35.351 [2024-12-16 22:23:41.443537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:35.351 [2024-12-16 22:23:41.443543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:35.351 [2024-12-16 22:23:41.443550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:35.351 [2024-12-16 22:23:41.443557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:35.351 [2024-12-16 22:23:41.443565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:35.351 [2024-12-16 22:23:41.443571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:35.351 [2024-12-16 22:23:41.443578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:35.351 [2024-12-16 22:23:41.443585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:35.351 [2024-12-16 22:23:41.443594] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:35.351 [2024-12-16 22:23:41.443602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:35.351 [2024-12-16 22:23:41.443611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:35.351 [2024-12-16 22:23:41.443618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:35.351 [2024-12-16 22:23:41.443628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:35.351 [2024-12-16 22:23:41.443636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:35.351 [2024-12-16 22:23:41.443644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.351 [2024-12-16 22:23:41.443651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:35.351 [2024-12-16 22:23:41.443659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:27:35.351 [2024-12-16 22:23:41.443669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.351 [2024-12-16 22:23:41.457465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.351 [2024-12-16 22:23:41.457510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.351 [2024-12-16 22:23:41.457527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.749 ms 00:27:35.351 [2024-12-16 22:23:41.457535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.351 [2024-12-16 22:23:41.457625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.351 [2024-12-16 22:23:41.457635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:35.351 [2024-12-16 22:23:41.457644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:35.351 [2024-12-16 22:23:41.457657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.351 [2024-12-16 22:23:41.482126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.351 [2024-12-16 22:23:41.482211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.351 [2024-12-16 22:23:41.482235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.408 ms 00:27:35.351 [2024-12-16 22:23:41.482251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.351 [2024-12-16 22:23:41.482332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.351 [2024-12-16 22:23:41.482352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.351 [2024-12-16 22:23:41.482379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:35.352 [2024-12-16 22:23:41.482403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.483150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.483209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.352 [2024-12-16 22:23:41.483228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.619 ms 00:27:35.352 [2024-12-16 22:23:41.483243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.483509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.483529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.352 [2024-12-16 22:23:41.483546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:27:35.352 [2024-12-16 22:23:41.483561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.492180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.492227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.352 [2024-12-16 22:23:41.492237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.582 ms 00:27:35.352 [2024-12-16 22:23:41.492245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.496104] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:35.352 [2024-12-16 22:23:41.496153] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:35.352 [2024-12-16 22:23:41.496169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.496178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:35.352 [2024-12-16 22:23:41.496187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.815 ms 00:27:35.352 [2024-12-16 22:23:41.496194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.511810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.511872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:35.352 [2024-12-16 22:23:41.511883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.564 ms 00:27:35.352 [2024-12-16 22:23:41.511891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.514725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.514771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:35.352 [2024-12-16 22:23:41.514780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.780 ms 00:27:35.352 [2024-12-16 22:23:41.514788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.517424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.517472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:35.352 [2024-12-16 22:23:41.517483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:27:35.352 [2024-12-16 22:23:41.517501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.517918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.517944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:35.352 [2024-12-16 22:23:41.517954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:27:35.352 [2024-12-16 
22:23:41.517967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.540936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.541000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:35.352 [2024-12-16 22:23:41.541014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.946 ms 00:27:35.352 [2024-12-16 22:23:41.541022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.549313] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:35.352 [2024-12-16 22:23:41.552368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.552408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:35.352 [2024-12-16 22:23:41.552426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.296 ms 00:27:35.352 [2024-12-16 22:23:41.552439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.552512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.552523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:35.352 [2024-12-16 22:23:41.552532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:35.352 [2024-12-16 22:23:41.552539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.553328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.553373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:35.352 [2024-12-16 22:23:41.553384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:27:35.352 [2024-12-16 22:23:41.553392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.553424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.553433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:35.352 [2024-12-16 22:23:41.553441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:35.352 [2024-12-16 22:23:41.553449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.553490] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:35.352 [2024-12-16 22:23:41.553500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.553508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:35.352 [2024-12-16 22:23:41.553522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:35.352 [2024-12-16 22:23:41.553529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.559231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.559283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:35.352 [2024-12-16 22:23:41.559294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.682 ms 00:27:35.352 [2024-12-16 22:23:41.559303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.559386] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.352 [2024-12-16 22:23:41.559396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:35.352 [2024-12-16 22:23:41.559405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:35.352 [2024-12-16 22:23:41.559428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.352 [2024-12-16 22:23:41.560646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.569 ms, result 0 00:27:36.738  [2024-12-16T22:23:44.063Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-16T22:23:45.005Z] Copying: 30/1024 [MB] (16 MBps) [2024-12-16T22:23:45.945Z] Copying: 50/1024 [MB] (20 MBps) [2024-12-16T22:23:46.889Z] Copying: 72/1024 [MB] (21 MBps) [2024-12-16T22:23:47.833Z] Copying: 95/1024 [MB] (23 MBps) [2024-12-16T22:23:48.777Z] Copying: 114/1024 [MB] (18 MBps) [2024-12-16T22:23:50.166Z] Copying: 137/1024 [MB] (23 MBps) [2024-12-16T22:23:50.739Z] Copying: 158/1024 [MB] (21 MBps) [2024-12-16T22:23:52.126Z] Copying: 180/1024 [MB] (21 MBps) [2024-12-16T22:23:53.070Z] Copying: 213/1024 [MB] (32 MBps) [2024-12-16T22:23:54.014Z] Copying: 238/1024 [MB] (25 MBps) [2024-12-16T22:23:54.958Z] Copying: 256/1024 [MB] (18 MBps) [2024-12-16T22:23:55.899Z] Copying: 274/1024 [MB] (18 MBps) [2024-12-16T22:23:56.842Z] Copying: 292/1024 [MB] (17 MBps) [2024-12-16T22:23:57.785Z] Copying: 312/1024 [MB] (19 MBps) [2024-12-16T22:23:59.172Z] Copying: 323/1024 [MB] (11 MBps) [2024-12-16T22:23:59.744Z] Copying: 336/1024 [MB] (13 MBps) [2024-12-16T22:24:01.130Z] Copying: 347/1024 [MB] (10 MBps) [2024-12-16T22:24:02.074Z] Copying: 357/1024 [MB] (10 MBps) [2024-12-16T22:24:03.014Z] Copying: 369/1024 [MB] (12 MBps) [2024-12-16T22:24:03.950Z] Copying: 380/1024 [MB] (10 MBps) [2024-12-16T22:24:04.894Z] Copying: 394/1024 [MB] (14 MBps) [2024-12-16T22:24:05.836Z] Copying: 410/1024 [MB] (15 MBps) [2024-12-16T22:24:06.797Z] Copying: 424/1024 [MB] (14 MBps) [2024-12-16T22:24:07.761Z] Copying: 436/1024 [MB] (11 MBps) [2024-12-16T22:24:09.146Z] Copying: 447/1024 [MB] (10 MBps) [2024-12-16T22:24:10.092Z] Copying: 459/1024 [MB] (11 MBps) [2024-12-16T22:24:11.035Z] Copying: 471/1024 [MB] (12 MBps) [2024-12-16T22:24:11.978Z] Copying: 497/1024 [MB] (25 MBps) [2024-12-16T22:24:12.922Z] Copying: 519/1024 [MB] (21 MBps) [2024-12-16T22:24:13.865Z] Copying: 539/1024 [MB] (20 MBps) [2024-12-16T22:24:14.808Z] Copying: 550/1024 [MB] (11 MBps) [2024-12-16T22:24:15.750Z] Copying: 566/1024 [MB] (15 MBps) [2024-12-16T22:24:17.137Z] Copying: 579/1024 [MB] (12 MBps) [2024-12-16T22:24:18.079Z] Copying: 593/1024 [MB] (14 MBps) [2024-12-16T22:24:19.022Z] Copying: 617/1024 [MB] (24 MBps) [2024-12-16T22:24:19.964Z] Copying: 642/1024 [MB] (25 MBps) [2024-12-16T22:24:20.908Z] Copying: 655/1024 [MB] (12 MBps) [2024-12-16T22:24:21.853Z] Copying: 671/1024 [MB] (16 MBps) [2024-12-16T22:24:22.796Z] Copying: 689/1024 [MB] (17 MBps) [2024-12-16T22:24:23.739Z] Copying: 704/1024 [MB] (14 MBps) [2024-12-16T22:24:25.125Z] Copying: 717/1024 [MB] (13 MBps) [2024-12-16T22:24:26.067Z] Copying: 730/1024 [MB] (12 MBps) [2024-12-16T22:24:27.011Z] Copying: 747/1024 [MB] (17 MBps) [2024-12-16T22:24:27.954Z] Copying: 758/1024 [MB] (10 MBps) [2024-12-16T22:24:28.898Z] Copying: 769/1024 [MB] (11 MBps) [2024-12-16T22:24:29.884Z] Copying: 783/1024 [MB] (13 MBps) [2024-12-16T22:24:30.852Z] Copying: 796/1024 [MB] (13 MBps) [2024-12-16T22:24:31.796Z] Copying: 818/1024 [MB] (21 MBps) 
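The Copying lines above and below this point are spdk_dd's progress meter for the read-back phase of the dirty-shutdown test: per the dirty_shutdown.sh@95 invocation at the top of this span, 262144 blocks are read from ftl0 starting at block offset 262144, which the meter reports as 1024 MB in total, implying 4 KiB blocks. A minimal sketch of that read-back-and-verify pattern, using only flags and files visible in this log (paths shortened; the authoritative logic lives in test/ftl/dirty_shutdown.sh):

    spdk_dd --ib=ftl0 --of=testfile2 --count=262144 --skip=262144 \
        --json=config/ftl.json   # read the second 1 GiB worth of blocks back
                                 # from the FTL bdev restored after the dirty shutdown
    md5sum -c testfile2.md5      # the checksum was recorded before the shutdown;
                                 # "testfile2: OK" further below means no data was lost

The remaining Copying entries continue below.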
[2024-12-16T22:24:32.739Z] Copying: 837/1024 [MB] (18 MBps) [2024-12-16T22:24:34.125Z] Copying: 856/1024 [MB] (19 MBps) [2024-12-16T22:24:35.069Z] Copying: 874/1024 [MB] (17 MBps) [2024-12-16T22:24:36.011Z] Copying: 890/1024 [MB] (16 MBps) [2024-12-16T22:24:36.954Z] Copying: 905/1024 [MB] (15 MBps) [2024-12-16T22:24:37.889Z] Copying: 918/1024 [MB] (13 MBps) [2024-12-16T22:24:38.826Z] Copying: 931/1024 [MB] (12 MBps) [2024-12-16T22:24:39.767Z] Copying: 947/1024 [MB] (15 MBps) [2024-12-16T22:24:41.141Z] Copying: 959/1024 [MB] (12 MBps) [2024-12-16T22:24:42.082Z] Copying: 974/1024 [MB] (15 MBps) [2024-12-16T22:24:43.020Z] Copying: 991/1024 [MB] (16 MBps) [2024-12-16T22:24:43.955Z] Copying: 1002/1024 [MB] (11 MBps) [2024-12-16T22:24:44.527Z] Copying: 1015/1024 [MB] (13 MBps) [2024-12-16T22:24:45.101Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 22:24:44.947283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.947387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:38.754 [2024-12-16 22:24:44.947413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:38.754 [2024-12-16 22:24:44.947425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.947456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:38.754 [2024-12-16 22:24:44.948467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.948510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:38.754 [2024-12-16 22:24:44.948523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:28:38.754 [2024-12-16 22:24:44.948534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.948809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.948822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:38.754 [2024-12-16 22:24:44.948832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:28:38.754 [2024-12-16 22:24:44.948866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.952366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.952389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:38.754 [2024-12-16 22:24:44.952402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.484 ms 00:28:38.754 [2024-12-16 22:24:44.952418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.959497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.959537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:38.754 [2024-12-16 22:24:44.959550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.059 ms 00:28:38.754 [2024-12-16 22:24:44.959568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.962801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.962862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:38.754 [2024-12-16 22:24:44.962874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.158 ms 00:28:38.754 [2024-12-16 22:24:44.962883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.968704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.968752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:38.754 [2024-12-16 22:24:44.968765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.769 ms 00:28:38.754 [2024-12-16 22:24:44.968775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.973767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.973806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:38.754 [2024-12-16 22:24:44.973848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.905 ms 00:28:38.754 [2024-12-16 22:24:44.973866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.977554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.977604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:38.754 [2024-12-16 22:24:44.977617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:28:38.754 [2024-12-16 22:24:44.977625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.980957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.980994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:38.754 [2024-12-16 22:24:44.981006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:28:38.754 [2024-12-16 22:24:44.981015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.983413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.983452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:38.754 [2024-12-16 22:24:44.983463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:28:38.754 [2024-12-16 22:24:44.983471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.986411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.754 [2024-12-16 22:24:44.986452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:38.754 [2024-12-16 22:24:44.986462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:28:38.754 [2024-12-16 22:24:44.986469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.754 [2024-12-16 22:24:44.986523] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:38.754 [2024-12-16 22:24:44.986543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:38.754 [2024-12-16 22:24:44.986556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:38.754 [2024-12-16 22:24:44.986566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:38.754 [2024-12-16 22:24:44.986575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
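The bands-validity dump in progress here is part of the FTL shutdown's statistics dump: Band 1 is fully written and closed (261120 / 261120 blocks), Band 2 is open with 1536 blocks written, and every other band is free (the identical free-band entries for Bands 5 through 100 are summarized on the next line). Note that 261120 + 1536 = 262656, matching the "total valid LBAs" figure reported just below. A quick, hypothetical way to tally such a dump from a saved copy of this console output (build.log stands in for wherever these lines were captured):

    grep -o 'Band [0-9]*: [0-9]* / 261120 wr_cnt: [0-9]* state: [a-z]*' build.log \
      | awk '{ blocks[$NF] += $3 } END { for (s in blocks) print s, blocks[s], "blocks" }'
    # for this run: closed 261120 blocks, open 1536 blocks, free 0 blocks
    # (output order may vary)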
00:28:38.754 [2024-12-16 22:24:44.986584 .. 22:24:44.987424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (96 identical entries)
00:28:38.755 [2024-12-16 22:24:44.987442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:38.755 [2024-12-16 22:24:44.987451] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ee7c4cb9-b627-4ec2-9321-998c9eed227b
00:28:38.756 [2024-12-16 22:24:44.987460] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:28:38.756 [2024-12-16 22:24:44.987467] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:28:38.756 [2024-12-16 22:24:44.987476] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:38.756 [2024-12-16 22:24:44.987484] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:38.756 [2024-12-16 22:24:44.987492] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:38.756 [2024-12-16 22:24:44.987501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:38.756 [2024-12-16 22:24:44.987521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:38.756 [2024-12-16 22:24:44.987539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:38.756 [2024-12-16 22:24:44.987547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:38.756 [2024-12-16 22:24:44.987556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.756 [2024-12-16 22:24:44.987568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:38.756 [2024-12-16 22:24:44.987579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:28:38.756 [2024-12-16 22:24:44.987587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:44.990795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.756 [2024-12-16 22:24:44.990834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:38.756 [2024-12-16 22:24:44.990860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:28:38.756 [2024-12-16 22:24:44.990869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:44.991026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.756 [2024-12-16 22:24:44.991037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:38.756 [2024-12-16 22:24:44.991050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:28:38.756 [2024-12-16 22:24:44.991059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.001252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.001296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.756 [2024-12-16 22:24:45.001313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.001322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.001382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.001392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.756 [2024-12-16 22:24:45.001402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.001411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.001487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.001503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.756 [2024-12-16 22:24:45.001512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.001523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.001541] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.001554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.756 [2024-12-16 22:24:45.001563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.001571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.020759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.020822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.756 [2024-12-16 22:24:45.020854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.020868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.036453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.036510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.756 [2024-12-16 22:24:45.036523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.036534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.036616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.036627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.756 [2024-12-16 22:24:45.036642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.036650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.036702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.036714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.756 [2024-12-16 22:24:45.036732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.036748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.036856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.036868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.756 [2024-12-16 22:24:45.036878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.036887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.036931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.036946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:38.756 [2024-12-16 22:24:45.036956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.036965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.037018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.037028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:38.756 [2024-12-16 22:24:45.037037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.037046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:38.756 [2024-12-16 22:24:45.037108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.756 [2024-12-16 22:24:45.037131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.756 [2024-12-16 22:24:45.037143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.756 [2024-12-16 22:24:45.037153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.756 [2024-12-16 22:24:45.037326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 90.003 ms, result 0 00:28:39.016 00:28:39.016 00:28:39.016 22:24:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:41.562 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92819 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92819 ']' 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 92819 00:28:41.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92819) - No such process 00:28:41.562 Process with pid 92819 is not found 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 92819 is not found' 00:28:41.562 22:24:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:41.823 Remove shared memory files 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:41.823 00:28:41.823 real 4m9.542s 00:28:41.823 user 4m40.335s 00:28:41.823 sys 0m28.948s 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.823 22:24:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:41.823 ************************************ 00:28:41.823 END TEST ftl_dirty_shutdown 00:28:41.823 ************************************ 00:28:42.084 22:24:48 ftl -- ftl/ftl.sh@78 -- # run_test 
ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:42.084 22:24:48 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:42.084 22:24:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.084 22:24:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:42.084 ************************************ 00:28:42.084 START TEST ftl_upgrade_shutdown 00:28:42.084 ************************************ 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:42.084 * Looking for test storage... 00:28:42.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:42.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.084 --rc genhtml_branch_coverage=1 00:28:42.084 --rc genhtml_function_coverage=1 00:28:42.084 --rc genhtml_legend=1 00:28:42.084 --rc geninfo_all_blocks=1 00:28:42.084 --rc geninfo_unexecuted_blocks=1 00:28:42.084 00:28:42.084 ' 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:42.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.084 --rc genhtml_branch_coverage=1 00:28:42.084 --rc genhtml_function_coverage=1 00:28:42.084 --rc genhtml_legend=1 00:28:42.084 --rc geninfo_all_blocks=1 00:28:42.084 --rc geninfo_unexecuted_blocks=1 00:28:42.084 00:28:42.084 ' 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:42.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.084 --rc genhtml_branch_coverage=1 00:28:42.084 --rc genhtml_function_coverage=1 00:28:42.084 --rc genhtml_legend=1 00:28:42.084 --rc geninfo_all_blocks=1 00:28:42.084 --rc geninfo_unexecuted_blocks=1 00:28:42.084 00:28:42.084 ' 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:42.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.084 --rc genhtml_branch_coverage=1 00:28:42.084 --rc genhtml_function_coverage=1 00:28:42.084 --rc genhtml_legend=1 00:28:42.084 --rc geninfo_all_blocks=1 00:28:42.084 --rc geninfo_unexecuted_blocks=1 00:28:42.084 00:28:42.084 ' 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:42.084 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:42.085 22:24:48 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95542 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95542 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95542 ']' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.085 22:24:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.347 [2024-12-16 22:24:48.471684] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:42.347 [2024-12-16 22:24:48.471863] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95542 ] 00:28:42.347 [2024-12-16 22:24:48.633860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.347 [2024-12-16 22:24:48.674870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:43.291 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:43.292 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:43.292 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:28:43.292 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:43.552 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:43.552 { 00:28:43.552 "name": "basen1", 00:28:43.552 "aliases": [ 00:28:43.552 "c3ee1bc7-103c-47c0-9a07-ae37da6ba27c" 00:28:43.552 ], 00:28:43.552 "product_name": "NVMe disk", 00:28:43.552 "block_size": 4096, 00:28:43.552 "num_blocks": 1310720, 00:28:43.552 "uuid": "c3ee1bc7-103c-47c0-9a07-ae37da6ba27c", 00:28:43.552 "numa_id": -1, 00:28:43.552 "assigned_rate_limits": { 00:28:43.552 "rw_ios_per_sec": 0, 00:28:43.552 "rw_mbytes_per_sec": 0, 00:28:43.552 "r_mbytes_per_sec": 0, 00:28:43.552 "w_mbytes_per_sec": 0 00:28:43.552 }, 00:28:43.552 "claimed": true, 00:28:43.552 "claim_type": "read_many_write_one", 00:28:43.552 "zoned": false, 00:28:43.552 "supported_io_types": { 00:28:43.552 "read": true, 00:28:43.552 "write": true, 00:28:43.552 "unmap": true, 00:28:43.552 "flush": true, 00:28:43.552 "reset": true, 00:28:43.552 "nvme_admin": true, 00:28:43.552 "nvme_io": true, 00:28:43.552 "nvme_io_md": false, 00:28:43.552 "write_zeroes": true, 00:28:43.552 "zcopy": false, 00:28:43.552 "get_zone_info": false, 00:28:43.552 "zone_management": false, 00:28:43.552 "zone_append": false, 00:28:43.552 "compare": true, 00:28:43.552 "compare_and_write": false, 00:28:43.552 "abort": true, 00:28:43.552 "seek_hole": false, 00:28:43.552 "seek_data": false, 00:28:43.552 "copy": true, 00:28:43.552 "nvme_iov_md": false 00:28:43.552 }, 00:28:43.552 "driver_specific": { 00:28:43.552 "nvme": [ 00:28:43.552 { 00:28:43.552 "pci_address": "0000:00:11.0", 00:28:43.552 "trid": { 00:28:43.552 "trtype": "PCIe", 00:28:43.552 "traddr": "0000:00:11.0" 00:28:43.552 }, 00:28:43.552 "ctrlr_data": { 00:28:43.552 "cntlid": 0, 00:28:43.552 "vendor_id": "0x1b36", 00:28:43.552 "model_number": "QEMU NVMe Ctrl", 00:28:43.552 "serial_number": "12341", 00:28:43.552 "firmware_revision": "8.0.0", 00:28:43.552 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:43.552 "oacs": { 00:28:43.552 "security": 0, 00:28:43.552 "format": 1, 00:28:43.552 "firmware": 0, 00:28:43.552 "ns_manage": 1 00:28:43.552 }, 00:28:43.552 "multi_ctrlr": false, 00:28:43.552 "ana_reporting": false 00:28:43.552 }, 00:28:43.552 "vs": { 00:28:43.552 "nvme_version": "1.4" 00:28:43.552 }, 00:28:43.552 "ns_data": { 00:28:43.552 "id": 1, 00:28:43.552 "can_share": false 00:28:43.552 } 00:28:43.552 } 00:28:43.552 ], 00:28:43.553 "mp_policy": "active_passive" 00:28:43.553 } 00:28:43.553 } 00:28:43.553 ]' 00:28:43.553 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:43.553 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:43.553 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:43.813 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:43.813 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:43.813 22:24:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:43.813 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:43.813 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:43.813 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:43.813 22:24:49 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:43.814 22:24:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:43.814 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=420b737f-bc92-4cb1-8c1f-3d5d71a5eb4c 00:28:43.814 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:43.814 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 420b737f-bc92-4cb1-8c1f-3d5d71a5eb4c 00:28:44.075 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:44.336 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=b014b0e5-8e80-4939-a559-fb04f301bbfa 00:28:44.336 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u b014b0e5-8e80-4939-a559-fb04f301bbfa 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=c7fa1123-54e7-4721-995b-dc4cabfa639b 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z c7fa1123-54e7-4721-995b-dc4cabfa639b ]] 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 c7fa1123-54e7-4721-995b-dc4cabfa639b 5120 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=c7fa1123-54e7-4721-995b-dc4cabfa639b 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size c7fa1123-54e7-4721-995b-dc4cabfa639b 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c7fa1123-54e7-4721-995b-dc4cabfa639b 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:44.597 22:24:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c7fa1123-54e7-4721-995b-dc4cabfa639b 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:44.857 { 00:28:44.857 "name": "c7fa1123-54e7-4721-995b-dc4cabfa639b", 00:28:44.857 "aliases": [ 00:28:44.857 "lvs/basen1p0" 00:28:44.857 ], 00:28:44.857 "product_name": "Logical Volume", 00:28:44.857 "block_size": 4096, 00:28:44.857 "num_blocks": 5242880, 00:28:44.857 "uuid": "c7fa1123-54e7-4721-995b-dc4cabfa639b", 00:28:44.857 "assigned_rate_limits": { 00:28:44.857 "rw_ios_per_sec": 0, 00:28:44.857 "rw_mbytes_per_sec": 0, 00:28:44.857 "r_mbytes_per_sec": 0, 00:28:44.857 "w_mbytes_per_sec": 0 00:28:44.857 }, 00:28:44.857 "claimed": false, 00:28:44.857 "zoned": false, 00:28:44.857 "supported_io_types": { 00:28:44.857 "read": true, 00:28:44.857 "write": true, 00:28:44.857 "unmap": true, 00:28:44.857 "flush": false, 00:28:44.857 "reset": true, 00:28:44.857 "nvme_admin": false, 00:28:44.857 "nvme_io": false, 00:28:44.857 "nvme_io_md": false, 00:28:44.857 "write_zeroes": 
true, 00:28:44.857 "zcopy": false, 00:28:44.857 "get_zone_info": false, 00:28:44.857 "zone_management": false, 00:28:44.857 "zone_append": false, 00:28:44.857 "compare": false, 00:28:44.857 "compare_and_write": false, 00:28:44.857 "abort": false, 00:28:44.857 "seek_hole": true, 00:28:44.857 "seek_data": true, 00:28:44.857 "copy": false, 00:28:44.857 "nvme_iov_md": false 00:28:44.857 }, 00:28:44.857 "driver_specific": { 00:28:44.857 "lvol": { 00:28:44.857 "lvol_store_uuid": "b014b0e5-8e80-4939-a559-fb04f301bbfa", 00:28:44.857 "base_bdev": "basen1", 00:28:44.857 "thin_provision": true, 00:28:44.857 "num_allocated_clusters": 0, 00:28:44.857 "snapshot": false, 00:28:44.857 "clone": false, 00:28:44.857 "esnap_clone": false 00:28:44.857 } 00:28:44.857 } 00:28:44.857 } 00:28:44.857 ]' 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:44.857 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:45.118 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:45.118 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:45.118 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:45.379 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:45.379 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:45.379 22:24:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d c7fa1123-54e7-4721-995b-dc4cabfa639b -c cachen1p0 --l2p_dram_limit 2 00:28:45.640 [2024-12-16 22:24:51.795414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.795493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:45.640 [2024-12-16 22:24:51.795512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:45.640 [2024-12-16 22:24:51.795524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.795586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.795603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:45.640 [2024-12-16 22:24:51.795615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:45.640 [2024-12-16 22:24:51.795637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.795659] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:45.640 [2024-12-16 
22:24:51.795965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:45.640 [2024-12-16 22:24:51.795989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.796003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:45.640 [2024-12-16 22:24:51.796015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:28:45.640 [2024-12-16 22:24:51.796028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.796060] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 22c2e225-9ea6-4375-a243-b0b5e29a1b4f 00:28:45.640 [2024-12-16 22:24:51.798355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.798410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:45.640 [2024-12-16 22:24:51.798424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:45.640 [2024-12-16 22:24:51.798433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.811115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.811162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:45.640 [2024-12-16 22:24:51.811178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.573 ms 00:28:45.640 [2024-12-16 22:24:51.811187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.811264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.811273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:45.640 [2024-12-16 22:24:51.811285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:45.640 [2024-12-16 22:24:51.811298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.811368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.811378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:45.640 [2024-12-16 22:24:51.811389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:45.640 [2024-12-16 22:24:51.811397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.811423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:45.640 [2024-12-16 22:24:51.814259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.814305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:45.640 [2024-12-16 22:24:51.814317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.845 ms 00:28:45.640 [2024-12-16 22:24:51.814328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.814365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.814377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:45.640 [2024-12-16 22:24:51.814386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:45.640 [2024-12-16 22:24:51.814400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.814418] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:45.640 [2024-12-16 22:24:51.814608] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:45.640 [2024-12-16 22:24:51.814624] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:45.640 [2024-12-16 22:24:51.814641] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:45.640 [2024-12-16 22:24:51.814654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:45.640 [2024-12-16 22:24:51.814679] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:45.640 [2024-12-16 22:24:51.814688] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:45.640 [2024-12-16 22:24:51.814707] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:45.640 [2024-12-16 22:24:51.814716] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:45.640 [2024-12-16 22:24:51.814727] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:45.640 [2024-12-16 22:24:51.814736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.814747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:45.640 [2024-12-16 22:24:51.814756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:28:45.640 [2024-12-16 22:24:51.814767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.814879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.640 [2024-12-16 22:24:51.814896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:45.640 [2024-12-16 22:24:51.814905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.094 ms 00:28:45.640 [2024-12-16 22:24:51.814920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.640 [2024-12-16 22:24:51.815019] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:45.640 [2024-12-16 22:24:51.815035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:45.640 [2024-12-16 22:24:51.815047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:45.640 [2024-12-16 22:24:51.815079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:45.640 [2024-12-16 22:24:51.815099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:45.640 [2024-12-16 22:24:51.815107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:45.640 [2024-12-16 22:24:51.815118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:45.640 [2024-12-16 22:24:51.815139] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:45.640 [2024-12-16 22:24:51.815149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:45.640 [2024-12-16 22:24:51.815171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:45.640 [2024-12-16 22:24:51.815183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:45.640 [2024-12-16 22:24:51.815201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:45.640 [2024-12-16 22:24:51.815209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:45.640 [2024-12-16 22:24:51.815228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:45.640 [2024-12-16 22:24:51.815238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:45.640 [2024-12-16 22:24:51.815263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:45.640 [2024-12-16 22:24:51.815272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:45.640 [2024-12-16 22:24:51.815290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:45.640 [2024-12-16 22:24:51.815299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:45.640 [2024-12-16 22:24:51.815319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:45.640 [2024-12-16 22:24:51.815326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:45.640 [2024-12-16 22:24:51.815342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:45.640 [2024-12-16 22:24:51.815351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:45.640 [2024-12-16 22:24:51.815368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:45.640 [2024-12-16 22:24:51.815394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:45.640 [2024-12-16 22:24:51.815420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:45.640 [2024-12-16 22:24:51.815427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815435] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:45.640 [2024-12-16 22:24:51.815444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:45.640 [2024-12-16 22:24:51.815457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.640 [2024-12-16 22:24:51.815475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:45.640 [2024-12-16 22:24:51.815483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:45.640 [2024-12-16 22:24:51.815493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:45.640 [2024-12-16 22:24:51.815499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:45.640 [2024-12-16 22:24:51.815508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:45.640 [2024-12-16 22:24:51.815516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:45.640 [2024-12-16 22:24:51.815527] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:45.640 [2024-12-16 22:24:51.815541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.640 [2024-12-16 22:24:51.815553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:45.640 [2024-12-16 22:24:51.815561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:45.640 [2024-12-16 22:24:51.815571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:45.640 [2024-12-16 22:24:51.815578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:45.641 [2024-12-16 22:24:51.815588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:45.641 [2024-12-16 22:24:51.815597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:45.641 [2024-12-16 22:24:51.815609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:45.641 [2024-12-16 22:24:51.815617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:45.641 [2024-12-16 22:24:51.815680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:45.641 [2024-12-16 22:24:51.815688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:45.641 [2024-12-16 22:24:51.815707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:45.641 [2024-12-16 22:24:51.815717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:45.641 [2024-12-16 22:24:51.815725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:45.641 [2024-12-16 22:24:51.815735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.641 [2024-12-16 22:24:51.815743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:45.641 [2024-12-16 22:24:51.815757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.781 ms 00:28:45.641 [2024-12-16 22:24:51.815764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.641 [2024-12-16 22:24:51.815828] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
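Each management step above is traced as a fixed four-record group (Action, name, duration, status), so the FTL startup can be read step by step. For reference, the device stack being started here was assembled earlier in this test by the RPC sequence below; this is a minimal sketch only, with angle-bracket placeholders standing in for the per-run UUIDs printed in the trace (420b737f..., b014b0e5..., c7fa1123...):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base side: attach the 0000:00:11.0 controller (exposes basen1), clear any
    # stale lvstores, then carve a 20480 MiB thin-provisioned lvol as the FTL base.
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done
    $rpc bdev_lvol_create_lvstore basen1 lvs
    $rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>

    # Cache side: attach 0000:00:10.0 (exposes cachen1) and split off a
    # 5120 MiB partition (cachen1p0) to serve as the NV cache.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1

    # Bind base lvol + cache partition into one FTL bdev; --l2p_dram_limit 2
    # matches the 'l2p maximum resident size is: 1 (of 2) MiB' notice below.
    $rpc -t 60 bdev_ftl_create -b ftl -d <base-lvol-uuid> -c cachen1p0 --l2p_dram_limit 2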
00:28:45.641 [2024-12-16 22:24:51.815863] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:48.993 [2024-12-16 22:24:55.298474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.298543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:48.993 [2024-12-16 22:24:55.298559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3482.632 ms 00:28:48.993 [2024-12-16 22:24:55.298567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:48.993 [2024-12-16 22:24:55.308781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.308818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:48.993 [2024-12-16 22:24:55.308829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.133 ms 00:28:48.993 [2024-12-16 22:24:55.308848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:48.993 [2024-12-16 22:24:55.308891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.308898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:48.993 [2024-12-16 22:24:55.308907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:48.993 [2024-12-16 22:24:55.308913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:48.993 [2024-12-16 22:24:55.318881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.318911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:48.993 [2024-12-16 22:24:55.318921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.913 ms 00:28:48.993 [2024-12-16 22:24:55.318930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:48.993 [2024-12-16 22:24:55.318958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.318964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:48.993 [2024-12-16 22:24:55.318973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:48.993 [2024-12-16 22:24:55.318980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:48.993 [2024-12-16 22:24:55.319386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.319402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:48.993 [2024-12-16 22:24:55.319411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:28:48.993 [2024-12-16 22:24:55.319419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:48.993 [2024-12-16 22:24:55.319458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.319465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:48.993 [2024-12-16 22:24:55.319473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:48.993 [2024-12-16 22:24:55.319483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:48.993 [2024-12-16 22:24:55.325982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:48.993 [2024-12-16 22:24:55.326007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:48.993 [2024-12-16 22:24:55.326016] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.483 ms 00:28:48.993 [2024-12-16 22:24:55.326023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.251 [2024-12-16 22:24:55.344304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:49.251 [2024-12-16 22:24:55.345483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.251 [2024-12-16 22:24:55.345702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:49.251 [2024-12-16 22:24:55.345726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.404 ms 00:28:49.251 [2024-12-16 22:24:55.345740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.251 [2024-12-16 22:24:55.362854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.251 [2024-12-16 22:24:55.362885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:49.251 [2024-12-16 22:24:55.362898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.069 ms 00:28:49.251 [2024-12-16 22:24:55.362909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.251 [2024-12-16 22:24:55.362981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.251 [2024-12-16 22:24:55.362991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:49.251 [2024-12-16 22:24:55.362997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:49.251 [2024-12-16 22:24:55.363005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.251 [2024-12-16 22:24:55.366009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.251 [2024-12-16 22:24:55.366038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:49.251 [2024-12-16 22:24:55.366048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.982 ms 00:28:49.251 [2024-12-16 22:24:55.366056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.251 [2024-12-16 22:24:55.369168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.251 [2024-12-16 22:24:55.369196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:49.252 [2024-12-16 22:24:55.369203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.085 ms 00:28:49.252 [2024-12-16 22:24:55.369211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.369439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.252 [2024-12-16 22:24:55.369455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:49.252 [2024-12-16 22:24:55.369462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:28:49.252 [2024-12-16 22:24:55.369471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.402090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.252 [2024-12-16 22:24:55.402120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:49.252 [2024-12-16 22:24:55.402132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.603 ms 00:28:49.252 [2024-12-16 22:24:55.402140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.406830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:49.252 [2024-12-16 22:24:55.406868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:49.252 [2024-12-16 22:24:55.406876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.655 ms 00:28:49.252 [2024-12-16 22:24:55.406888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.410304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.252 [2024-12-16 22:24:55.410331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:49.252 [2024-12-16 22:24:55.410341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.389 ms 00:28:49.252 [2024-12-16 22:24:55.410348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.414455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.252 [2024-12-16 22:24:55.414485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:49.252 [2024-12-16 22:24:55.414492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.082 ms 00:28:49.252 [2024-12-16 22:24:55.414501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.414547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.252 [2024-12-16 22:24:55.414562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:49.252 [2024-12-16 22:24:55.414569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:49.252 [2024-12-16 22:24:55.414577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.414631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:49.252 [2024-12-16 22:24:55.414641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:49.252 [2024-12-16 22:24:55.414648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:49.252 [2024-12-16 22:24:55.414659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:49.252 [2024-12-16 22:24:55.415470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3619.720 ms, result 0 00:28:49.252 { 00:28:49.252 "name": "ftl", 00:28:49.252 "uuid": "22c2e225-9ea6-4375-a243-b0b5e29a1b4f" 00:28:49.252 } 00:28:49.252 22:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:49.510 [2024-12-16 22:24:55.620597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.510 22:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:49.510 22:24:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:49.768 [2024-12-16 22:24:56.016863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:49.768 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:50.027 [2024-12-16 22:24:56.217170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:50.027 22:24:56 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:50.286 Fill FTL, iteration 1 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95660 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95660 /var/tmp/spdk.tgt.sock 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95660 ']' 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:50.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.286 22:24:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:50.545 [2024-12-16 22:24:56.633917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
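The pid-95660 startup here is tcp_dd's one-shot initiator setup (ftl/common.sh@198): a helper spdk_tgt on core 1 with a private RPC socket connects to the NVMe/TCP export published above, and its bdev config is captured once so that later tcp_dd calls can skip setup (the '@154 return 0' lines further down). A sketch of that pattern, assuming helper_pid holds the backgrounded target's PID; all flags are as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side, already done above:
    #   $rpc nvmf_create_transport --trtype TCP
    #   $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    #   $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    #   $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

    # Helper target on core 1 with its own RPC socket...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    helper_pid=$!

    # ...connects to the export on 127.0.0.1:4420, which lands the FTL
    # namespace locally as bdev 'ftln1'.
    $rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

    # Capture just the bdev subsystem as ini.json, then stop the helper;
    # spdk_dd replays this JSON itself on every invocation.
    {
        echo '{"subsystems": ['
        $rpc -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    kill "$helper_pid"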
00:28:50.545 [2024-12-16 22:24:56.634536] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95660 ] 00:28:50.545 [2024-12-16 22:24:56.789131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.545 [2024-12-16 22:24:56.808489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.481 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.481 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:51.481 22:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:51.481 ftln1 00:28:51.481 22:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:51.481 22:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95660 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95660 ']' 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95660 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95660 00:28:51.740 killing process with pid 95660 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95660' 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95660 00:28:51.740 22:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95660 00:28:51.999 22:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:51.999 22:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:51.999 [2024-12-16 22:24:58.257284] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
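The fill itself is a single spdk_dd run against that cached config: 1024 randomly-filled 1 MiB blocks written into ftln1 at queue depth 2, offset by --seek blocks (0 for this first iteration). The Copying lines that follow report per-interval throughput, about 230 MBps on average here. Flags exactly as traced:

    # Fill FTL, iteration 1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0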
00:28:51.999 [2024-12-16 22:24:58.257401] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95691 ] 00:28:52.257 [2024-12-16 22:24:58.414321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.257 [2024-12-16 22:24:58.433209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.642  [2024-12-16T22:25:00.931Z] Copying: 183/1024 [MB] (183 MBps) [2024-12-16T22:25:01.873Z] Copying: 405/1024 [MB] (222 MBps) [2024-12-16T22:25:02.814Z] Copying: 660/1024 [MB] (255 MBps) [2024-12-16T22:25:03.074Z] Copying: 910/1024 [MB] (250 MBps) [2024-12-16T22:25:03.336Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:28:56.989 00:28:56.989 Calculate MD5 checksum, iteration 1 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:56.989 22:25:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:56.989 [2024-12-16 22:25:03.262217] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:56.989 [2024-12-16 22:25:03.262338] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95744 ] 00:28:57.251 [2024-12-16 22:25:03.413852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.251 [2024-12-16 22:25:03.436252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.637  [2024-12-16T22:25:05.245Z] Copying: 653/1024 [MB] (653 MBps) [2024-12-16T22:25:05.504Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:28:59.157 00:28:59.157 22:25:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:59.157 22:25:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:01.701 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:01.702 Fill FTL, iteration 2 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f1ac2a74d2bbf88112276f95b424c6d3 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:01.702 22:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:01.702 [2024-12-16 22:25:07.484333] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:01.702 [2024-12-16 22:25:07.484447] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95795 ] 00:29:01.702 [2024-12-16 22:25:07.636739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.702 [2024-12-16 22:25:07.653258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.643  [2024-12-16T22:25:09.932Z] Copying: 258/1024 [MB] (258 MBps) [2024-12-16T22:25:10.871Z] Copying: 503/1024 [MB] (245 MBps) [2024-12-16T22:25:12.253Z] Copying: 761/1024 [MB] (258 MBps) [2024-12-16T22:25:12.253Z] Copying: 1019/1024 [MB] (258 MBps) [2024-12-16T22:25:12.253Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:29:05.906 00:29:05.906 Calculate MD5 checksum, iteration 2 00:29:05.906 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:05.906 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:05.906 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:05.906 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:05.907 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:05.907 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:05.907 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:05.907 22:25:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:05.907 [2024-12-16 22:25:12.067155] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:05.907 [2024-12-16 22:25:12.067424] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95844 ] 00:29:05.907 [2024-12-16 22:25:12.222224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.907 [2024-12-16 22:25:12.246757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.285  [2024-12-16T22:25:14.205Z] Copying: 675/1024 [MB] (675 MBps) [2024-12-16T22:25:16.745Z] Copying: 1024/1024 [MB] (average 666 MBps) 00:29:10.398 00:29:10.398 22:25:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:10.398 22:25:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:12.351 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:12.351 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=92b53af91831033fa4227f8a67d44446 00:29:12.351 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:12.351 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:12.351 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:12.351 [2024-12-16 22:25:18.447818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.351 [2024-12-16 22:25:18.447876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:12.351 [2024-12-16 22:25:18.447889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:12.351 [2024-12-16 22:25:18.447898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.351 [2024-12-16 22:25:18.447916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.351 [2024-12-16 22:25:18.447923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:12.351 [2024-12-16 22:25:18.447930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:12.351 [2024-12-16 22:25:18.447937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.351 [2024-12-16 22:25:18.447953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.351 [2024-12-16 22:25:18.447959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:12.351 [2024-12-16 22:25:18.447972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:12.351 [2024-12-16 22:25:18.447977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.351 [2024-12-16 22:25:18.448032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.200 ms, result 0 00:29:12.351 true 00:29:12.351 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:12.351 { 00:29:12.351 "name": "ftl", 00:29:12.351 "properties": [ 00:29:12.351 { 00:29:12.351 "name": "superblock_version", 00:29:12.351 "value": 5, 00:29:12.351 "read-only": true 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "name": "base_device", 00:29:12.351 "bands": [ 00:29:12.351 { 00:29:12.351 "id": 0, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 
00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 1, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 2, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 3, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 4, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 5, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 6, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 7, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 8, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 9, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 10, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 11, 00:29:12.351 "state": "FREE", 00:29:12.351 "validity": 0.0 00:29:12.351 }, 00:29:12.351 { 00:29:12.351 "id": 12, 00:29:12.352 "state": "FREE", 00:29:12.352 "validity": 0.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 13, 00:29:12.352 "state": "FREE", 00:29:12.352 "validity": 0.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 14, 00:29:12.352 "state": "FREE", 00:29:12.352 "validity": 0.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 15, 00:29:12.352 "state": "FREE", 00:29:12.352 "validity": 0.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 16, 00:29:12.352 "state": "FREE", 00:29:12.352 "validity": 0.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 17, 00:29:12.352 "state": "FREE", 00:29:12.352 "validity": 0.0 00:29:12.352 } 00:29:12.352 ], 00:29:12.352 "read-only": true 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "name": "cache_device", 00:29:12.352 "type": "bdev", 00:29:12.352 "chunks": [ 00:29:12.352 { 00:29:12.352 "id": 0, 00:29:12.352 "state": "INACTIVE", 00:29:12.352 "utilization": 0.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 1, 00:29:12.352 "state": "CLOSED", 00:29:12.352 "utilization": 1.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 2, 00:29:12.352 "state": "CLOSED", 00:29:12.352 "utilization": 1.0 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 3, 00:29:12.352 "state": "OPEN", 00:29:12.352 "utilization": 0.001953125 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "id": 4, 00:29:12.352 "state": "OPEN", 00:29:12.352 "utilization": 0.0 00:29:12.352 } 00:29:12.352 ], 00:29:12.352 "read-only": true 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "name": "verbose_mode", 00:29:12.352 "value": true, 00:29:12.352 "unit": "", 00:29:12.352 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:12.352 }, 00:29:12.352 { 00:29:12.352 "name": "prep_upgrade_on_shutdown", 00:29:12.352 "value": false, 00:29:12.352 "unit": "", 00:29:12.352 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:12.352 } 00:29:12.352 ] 00:29:12.352 } 00:29:12.352 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:12.610 [2024-12-16 22:25:18.856163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
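[Editorial note] The trace around this point flips the FTL properties over JSON-RPC (verbose_mode, then prep_upgrade_on_shutdown) and counts the cache chunks that actually hold data before letting the shutdown path run. A condensed sketch of that sequence, built only from the rpc.py and jq invocations visible in this run (the RPC socket argument and error handling are elided, and the non-zero guard is an assumption about what upgrade_shutdown.sh@64 does on failure):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Ask FTL to persist everything needed for an upgrade on the next shutdown.
  $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

  # Count cache_device chunks with non-zero utilization (3 in this run:
  # two CLOSED chunks at 1.0 plus one OPEN chunk at ~0.002).
  used=$($RPC bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device") | .chunks[]
         | select(.utilization != 0.0)] | length')

  # The test only proceeds to shut the target down if some data is cached.
  [[ $used -eq 0 ]] && exit 1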
00:29:12.610 [2024-12-16 22:25:18.856295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:12.610 [2024-12-16 22:25:18.856342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:12.610 [2024-12-16 22:25:18.856359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.610 [2024-12-16 22:25:18.856390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.610 [2024-12-16 22:25:18.856407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:12.610 [2024-12-16 22:25:18.856422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:12.610 [2024-12-16 22:25:18.856437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.610 [2024-12-16 22:25:18.856461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.610 [2024-12-16 22:25:18.856477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:12.610 [2024-12-16 22:25:18.856493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:12.610 [2024-12-16 22:25:18.856541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.610 [2024-12-16 22:25:18.856602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.427 ms, result 0 00:29:12.610 true 00:29:12.610 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:12.610 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:12.610 22:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:12.868 22:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:12.868 22:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:12.868 22:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:13.127 [2024-12-16 22:25:19.280528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.127 [2024-12-16 22:25:19.280557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:13.127 [2024-12-16 22:25:19.280565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:13.127 [2024-12-16 22:25:19.280570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.127 [2024-12-16 22:25:19.280587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.127 [2024-12-16 22:25:19.280593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:13.127 [2024-12-16 22:25:19.280599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:13.127 [2024-12-16 22:25:19.280604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.127 [2024-12-16 22:25:19.280618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.127 [2024-12-16 22:25:19.280624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:13.127 [2024-12-16 22:25:19.280629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:13.127 [2024-12-16 22:25:19.280634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:13.127 [2024-12-16 22:25:19.280673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.135 ms, result 0 00:29:13.127 true 00:29:13.127 22:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:13.386 { 00:29:13.386 "name": "ftl", 00:29:13.386 "properties": [ 00:29:13.386 { 00:29:13.386 "name": "superblock_version", 00:29:13.386 "value": 5, 00:29:13.386 "read-only": true 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "name": "base_device", 00:29:13.386 "bands": [ 00:29:13.386 { 00:29:13.386 "id": 0, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 1, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 2, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 3, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 4, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 5, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 6, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 7, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 8, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 9, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 10, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 11, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 12, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 13, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 14, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 15, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 16, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 17, 00:29:13.386 "state": "FREE", 00:29:13.386 "validity": 0.0 00:29:13.386 } 00:29:13.386 ], 00:29:13.386 "read-only": true 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "name": "cache_device", 00:29:13.386 "type": "bdev", 00:29:13.386 "chunks": [ 00:29:13.386 { 00:29:13.386 "id": 0, 00:29:13.386 "state": "INACTIVE", 00:29:13.386 "utilization": 0.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 1, 00:29:13.386 "state": "CLOSED", 00:29:13.386 "utilization": 1.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 2, 00:29:13.386 "state": "CLOSED", 00:29:13.386 "utilization": 1.0 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 3, 00:29:13.386 "state": "OPEN", 00:29:13.386 "utilization": 0.001953125 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "id": 4, 00:29:13.386 "state": "OPEN", 00:29:13.386 "utilization": 0.0 00:29:13.386 } 00:29:13.386 ], 00:29:13.386 "read-only": true 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "name": "verbose_mode", 
00:29:13.386 "value": true, 00:29:13.386 "unit": "", 00:29:13.386 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:13.386 }, 00:29:13.386 { 00:29:13.386 "name": "prep_upgrade_on_shutdown", 00:29:13.386 "value": true, 00:29:13.386 "unit": "", 00:29:13.386 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:13.386 } 00:29:13.386 ] 00:29:13.386 } 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95542 ]] 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95542 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95542 ']' 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95542 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95542 00:29:13.386 killing process with pid 95542 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95542' 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95542 00:29:13.386 22:25:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95542 00:29:13.386 [2024-12-16 22:25:19.650066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:13.386 [2024-12-16 22:25:19.654204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.386 [2024-12-16 22:25:19.654235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:13.386 [2024-12-16 22:25:19.654246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:13.386 [2024-12-16 22:25:19.654253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.386 [2024-12-16 22:25:19.654272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:13.386 [2024-12-16 22:25:19.654798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.386 [2024-12-16 22:25:19.654819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:13.386 [2024-12-16 22:25:19.654827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:29:13.386 [2024-12-16 22:25:19.654833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.796736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.796785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:21.528 [2024-12-16 22:25:27.796797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8141.846 ms 00:29:21.528 [2024-12-16 22:25:27.796804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.797891] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.798002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:21.528 [2024-12-16 22:25:27.798014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.075 ms 00:29:21.528 [2024-12-16 22:25:27.798026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.798889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.798907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:21.528 [2024-12-16 22:25:27.798914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:29:21.528 [2024-12-16 22:25:27.798921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.800317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.800345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:21.528 [2024-12-16 22:25:27.800353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.373 ms 00:29:21.528 [2024-12-16 22:25:27.800358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.802123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.802150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:21.528 [2024-12-16 22:25:27.802157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.742 ms 00:29:21.528 [2024-12-16 22:25:27.802168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.802220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.802227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:21.528 [2024-12-16 22:25:27.802241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:21.528 [2024-12-16 22:25:27.802246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.803301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.803328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:21.528 [2024-12-16 22:25:27.803334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.043 ms 00:29:21.528 [2024-12-16 22:25:27.803339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.804215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.804240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:21.528 [2024-12-16 22:25:27.804247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.853 ms 00:29:21.528 [2024-12-16 22:25:27.804253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.805168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.805271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:21.528 [2024-12-16 22:25:27.805283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.893 ms 00:29:21.528 [2024-12-16 22:25:27.805289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.806248] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.528 [2024-12-16 22:25:27.806270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:21.528 [2024-12-16 22:25:27.806277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.914 ms 00:29:21.528 [2024-12-16 22:25:27.806282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.528 [2024-12-16 22:25:27.806304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:21.528 [2024-12-16 22:25:27.806315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:21.528 [2024-12-16 22:25:27.806323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:21.528 [2024-12-16 22:25:27.806329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:21.529 [2024-12-16 22:25:27.806335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:21.529 [2024-12-16 22:25:27.806424] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:21.529 [2024-12-16 22:25:27.806429] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 22c2e225-9ea6-4375-a243-b0b5e29a1b4f 00:29:21.529 [2024-12-16 22:25:27.806435] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:21.529 [2024-12-16 22:25:27.806444] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:21.529 [2024-12-16 22:25:27.806449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:21.529 [2024-12-16 22:25:27.806455] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:21.529 [2024-12-16 22:25:27.806461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:21.529 [2024-12-16 22:25:27.806468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:21.529 [2024-12-16 22:25:27.806474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:21.529 [2024-12-16 22:25:27.806479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:21.529 [2024-12-16 22:25:27.806484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:21.529 [2024-12-16 22:25:27.806490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.529 [2024-12-16 22:25:27.806496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:21.529 [2024-12-16 22:25:27.806502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:29:21.529 [2024-12-16 22:25:27.806508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.807802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.529 [2024-12-16 22:25:27.807820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:21.529 [2024-12-16 22:25:27.807827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.283 ms 00:29:21.529 [2024-12-16 22:25:27.807833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.807904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:21.529 [2024-12-16 22:25:27.807911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:21.529 [2024-12-16 22:25:27.807917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:29:21.529 [2024-12-16 22:25:27.807923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.812279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.812372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:21.529 [2024-12-16 22:25:27.812438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.812457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.812488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.812505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:21.529 [2024-12-16 22:25:27.812553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.812570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.812620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.812644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:21.529 [2024-12-16 22:25:27.812659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.812702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.812727] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.812822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:21.529 [2024-12-16 22:25:27.812876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.812895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.820831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.820962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:21.529 [2024-12-16 22:25:27.821005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.821022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.827230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.827337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:21.529 [2024-12-16 22:25:27.827382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.827401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.827468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.827527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:21.529 [2024-12-16 22:25:27.827545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.827560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.827596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.827613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:21.529 [2024-12-16 22:25:27.827711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.827729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.827798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.827937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:21.529 [2024-12-16 22:25:27.827959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.827976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.828081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.828093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:21.529 [2024-12-16 22:25:27.828100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.828106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.828134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.828141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:21.529 [2024-12-16 22:25:27.828150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.828156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 
[2024-12-16 22:25:27.828190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:21.529 [2024-12-16 22:25:27.828198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:21.529 [2024-12-16 22:25:27.828204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:21.529 [2024-12-16 22:25:27.828209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:21.529 [2024-12-16 22:25:27.828303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8174.054 ms, result 0 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96037 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96037 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96037 ']' 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.837 22:25:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.837 [2024-12-16 22:25:31.035676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:24.837 [2024-12-16 22:25:31.035959] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96037 ] 00:29:25.099 [2024-12-16 22:25:31.191941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.099 [2024-12-16 22:25:31.211367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.362 [2024-12-16 22:25:31.460610] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:25.362 [2024-12-16 22:25:31.460827] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:25.362 [2024-12-16 22:25:31.602185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.362 [2024-12-16 22:25:31.602305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:25.362 [2024-12-16 22:25:31.602358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:25.362 [2024-12-16 22:25:31.602382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.362 [2024-12-16 22:25:31.602438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.362 [2024-12-16 22:25:31.602459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:25.362 [2024-12-16 22:25:31.602476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:25.362 [2024-12-16 22:25:31.602490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.362 [2024-12-16 22:25:31.602519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:25.362 [2024-12-16 22:25:31.602725] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:25.362 [2024-12-16 22:25:31.602845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.362 [2024-12-16 22:25:31.602864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:25.362 [2024-12-16 22:25:31.602880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:29:25.362 [2024-12-16 22:25:31.602894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.362 [2024-12-16 22:25:31.603801] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:25.362 [2024-12-16 22:25:31.605800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.362 [2024-12-16 22:25:31.605908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:25.362 [2024-12-16 22:25:31.605921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.000 ms 00:29:25.362 [2024-12-16 22:25:31.605927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.362 [2024-12-16 22:25:31.605965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.362 [2024-12-16 22:25:31.605973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:25.362 [2024-12-16 22:25:31.605980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:25.362 [2024-12-16 22:25:31.605989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.362 [2024-12-16 22:25:31.610185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.363 [2024-12-16 
22:25:31.610210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:25.363 [2024-12-16 22:25:31.610217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.164 ms 00:29:25.363 [2024-12-16 22:25:31.610223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.363 [2024-12-16 22:25:31.610262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.363 [2024-12-16 22:25:31.610269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:25.363 [2024-12-16 22:25:31.610276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:25.363 [2024-12-16 22:25:31.610281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.363 [2024-12-16 22:25:31.610310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.363 [2024-12-16 22:25:31.610320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:25.363 [2024-12-16 22:25:31.610329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:25.363 [2024-12-16 22:25:31.610335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.363 [2024-12-16 22:25:31.610351] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:25.363 [2024-12-16 22:25:31.611489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.363 [2024-12-16 22:25:31.611513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:25.363 [2024-12-16 22:25:31.611521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.142 ms 00:29:25.363 [2024-12-16 22:25:31.611531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.363 [2024-12-16 22:25:31.611553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.363 [2024-12-16 22:25:31.611559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:25.363 [2024-12-16 22:25:31.611565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:25.363 [2024-12-16 22:25:31.611571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.363 [2024-12-16 22:25:31.611587] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:25.363 [2024-12-16 22:25:31.611603] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:25.363 [2024-12-16 22:25:31.611629] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:25.363 [2024-12-16 22:25:31.611642] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:25.363 [2024-12-16 22:25:31.611723] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:25.363 [2024-12-16 22:25:31.611733] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:25.363 [2024-12-16 22:25:31.611741] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:25.363 [2024-12-16 22:25:31.611749] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:25.363 [2024-12-16 22:25:31.611755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:25.363 [2024-12-16 22:25:31.611761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:25.363 [2024-12-16 22:25:31.611766] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:25.363 [2024-12-16 22:25:31.611774] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:25.363 [2024-12-16 22:25:31.611780] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:25.363 [2024-12-16 22:25:31.611788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.363 [2024-12-16 22:25:31.611795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:25.363 [2024-12-16 22:25:31.611801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:29:25.363 [2024-12-16 22:25:31.611807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.363 [2024-12-16 22:25:31.611887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.363 [2024-12-16 22:25:31.611894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:25.363 [2024-12-16 22:25:31.611903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:25.363 [2024-12-16 22:25:31.611908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.363 [2024-12-16 22:25:31.611989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:25.363 [2024-12-16 22:25:31.611997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:25.363 [2024-12-16 22:25:31.612005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:25.363 [2024-12-16 22:25:31.612022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:25.363 [2024-12-16 22:25:31.612032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:25.363 [2024-12-16 22:25:31.612037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:25.363 [2024-12-16 22:25:31.612041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:25.363 [2024-12-16 22:25:31.612051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:25.363 [2024-12-16 22:25:31.612056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:25.363 [2024-12-16 22:25:31.612073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:25.363 [2024-12-16 22:25:31.612078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:25.363 [2024-12-16 22:25:31.612091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:25.363 [2024-12-16 22:25:31.612095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612100] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:25.363 [2024-12-16 22:25:31.612105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:25.363 [2024-12-16 22:25:31.612110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:25.363 [2024-12-16 22:25:31.612119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:25.363 [2024-12-16 22:25:31.612124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:25.363 [2024-12-16 22:25:31.612134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:25.363 [2024-12-16 22:25:31.612139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:25.363 [2024-12-16 22:25:31.612148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:25.363 [2024-12-16 22:25:31.612153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:25.363 [2024-12-16 22:25:31.612165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:25.363 [2024-12-16 22:25:31.612169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:25.363 [2024-12-16 22:25:31.612179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:25.363 [2024-12-16 22:25:31.612193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:25.363 [2024-12-16 22:25:31.612208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:25.363 [2024-12-16 22:25:31.612213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:25.363 [2024-12-16 22:25:31.612227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:25.363 [2024-12-16 22:25:31.612232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.363 [2024-12-16 22:25:31.612243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:25.363 [2024-12-16 22:25:31.612249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:25.363 [2024-12-16 22:25:31.612254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:25.363 [2024-12-16 22:25:31.612259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:25.363 [2024-12-16 22:25:31.612264] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:25.363 [2024-12-16 22:25:31.612269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:25.363 [2024-12-16 22:25:31.612275] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:25.363 [2024-12-16 22:25:31.612282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.363 [2024-12-16 22:25:31.612288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:25.363 [2024-12-16 22:25:31.612293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:25.363 [2024-12-16 22:25:31.612299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:25.363 [2024-12-16 22:25:31.612304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:25.363 [2024-12-16 22:25:31.612309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:25.363 [2024-12-16 22:25:31.612314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:25.363 [2024-12-16 22:25:31.612319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:25.364 [2024-12-16 22:25:31.612325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:25.364 [2024-12-16 22:25:31.612364] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:25.364 [2024-12-16 22:25:31.612370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:25.364 [2024-12-16 22:25:31.612381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:25.364 [2024-12-16 22:25:31.612386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:25.364 [2024-12-16 22:25:31.612391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:25.364 [2024-12-16 22:25:31.612396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.364 [2024-12-16 22:25:31.612405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:25.364 [2024-12-16 22:25:31.612410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.462 ms 00:29:25.364 [2024-12-16 22:25:31.612415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.364 [2024-12-16 22:25:31.612444] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:25.364 [2024-12-16 22:25:31.612455] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:28.666 [2024-12-16 22:25:34.970349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.970419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:28.666 [2024-12-16 22:25:34.970436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3357.892 ms 00:29:28.666 [2024-12-16 22:25:34.970456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:34.980118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.980161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:28.666 [2024-12-16 22:25:34.980173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.549 ms 00:29:28.666 [2024-12-16 22:25:34.980182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:34.980224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.980233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:28.666 [2024-12-16 22:25:34.980242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:28.666 [2024-12-16 22:25:34.980254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:34.989971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.990146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:28.666 [2024-12-16 22:25:34.990163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.666 ms 00:29:28.666 [2024-12-16 22:25:34.990172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:34.990203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.990211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:28.666 [2024-12-16 22:25:34.990224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:28.666 [2024-12-16 22:25:34.990236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:34.990674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.990693] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:28.666 [2024-12-16 22:25:34.990704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:29:28.666 [2024-12-16 22:25:34.990713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:34.990776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.990787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:28.666 [2024-12-16 22:25:34.990796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:28.666 [2024-12-16 22:25:34.990804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:34.997256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.666 [2024-12-16 22:25:34.997391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:28.666 [2024-12-16 22:25:34.997407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.425 ms 00:29:28.666 [2024-12-16 22:25:34.997415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.666 [2024-12-16 22:25:35.011573] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:28.667 [2024-12-16 22:25:35.011623] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:28.667 [2024-12-16 22:25:35.011642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.667 [2024-12-16 22:25:35.011652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:28.667 [2024-12-16 22:25:35.011662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.108 ms 00:29:28.667 [2024-12-16 22:25:35.011670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.016247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.016294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:28.932 [2024-12-16 22:25:35.016307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.528 ms 00:29:28.932 [2024-12-16 22:25:35.016317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.018600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.018640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:28.932 [2024-12-16 22:25:35.018651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.230 ms 00:29:28.932 [2024-12-16 22:25:35.018660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.020872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.020911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:28.932 [2024-12-16 22:25:35.020922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.165 ms 00:29:28.932 [2024-12-16 22:25:35.020931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.021370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.021393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:28.932 [2024-12-16 
22:25:35.021409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.335 ms 00:29:28.932 [2024-12-16 22:25:35.021418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.041235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.041288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:28.932 [2024-12-16 22:25:35.041301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.790 ms 00:29:28.932 [2024-12-16 22:25:35.041309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.049153] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:28.932 [2024-12-16 22:25:35.050044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.050202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:28.932 [2024-12-16 22:25:35.050220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.677 ms 00:29:28.932 [2024-12-16 22:25:35.050228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.050288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.050298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:28.932 [2024-12-16 22:25:35.050307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:28.932 [2024-12-16 22:25:35.050315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.050381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.050392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:28.932 [2024-12-16 22:25:35.050408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:28.932 [2024-12-16 22:25:35.050416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.050437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.050446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:28.932 [2024-12-16 22:25:35.050454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:28.932 [2024-12-16 22:25:35.050468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.050502] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:28.932 [2024-12-16 22:25:35.050512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.050520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:28.932 [2024-12-16 22:25:35.050528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:28.932 [2024-12-16 22:25:35.050538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.054179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.054210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:28.932 [2024-12-16 22:25:35.054220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.602 ms 00:29:28.932 [2024-12-16 22:25:35.054227] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.054293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.932 [2024-12-16 22:25:35.054303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:28.932 [2024-12-16 22:25:35.054311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:28.932 [2024-12-16 22:25:35.054319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.932 [2024-12-16 22:25:35.055280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3452.695 ms, result 0 00:29:28.932 [2024-12-16 22:25:35.070814] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.932 [2024-12-16 22:25:35.086800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:28.932 [2024-12-16 22:25:35.094911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:29.194 22:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.194 22:25:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:29.194 22:25:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:29.194 22:25:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:29.194 22:25:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:29.453 [2024-12-16 22:25:35.571379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.453 [2024-12-16 22:25:35.571428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:29.453 [2024-12-16 22:25:35.571442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:29.453 [2024-12-16 22:25:35.571451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.453 [2024-12-16 22:25:35.571474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.453 [2024-12-16 22:25:35.571483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:29.453 [2024-12-16 22:25:35.571499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:29.453 [2024-12-16 22:25:35.571510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.453 [2024-12-16 22:25:35.571530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:29.453 [2024-12-16 22:25:35.571538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:29.453 [2024-12-16 22:25:35.571546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:29.453 [2024-12-16 22:25:35.571554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:29.453 [2024-12-16 22:25:35.571614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.224 ms, result 0 00:29:29.453 true 00:29:29.453 22:25:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:29.453 { 00:29:29.453 "name": "ftl", 00:29:29.453 "properties": [ 00:29:29.453 { 00:29:29.453 "name": "superblock_version", 00:29:29.453 "value": 5, 00:29:29.453 "read-only": true 00:29:29.453 }, 00:29:29.453 { 
00:29:29.453 "name": "base_device", 00:29:29.453 "bands": [ 00:29:29.453 { 00:29:29.453 "id": 0, 00:29:29.453 "state": "CLOSED", 00:29:29.453 "validity": 1.0 00:29:29.453 }, 00:29:29.453 { 00:29:29.453 "id": 1, 00:29:29.453 "state": "CLOSED", 00:29:29.453 "validity": 1.0 00:29:29.453 }, 00:29:29.453 { 00:29:29.453 "id": 2, 00:29:29.453 "state": "CLOSED", 00:29:29.453 "validity": 0.007843137254901933 00:29:29.453 }, 00:29:29.453 { 00:29:29.453 "id": 3, 00:29:29.453 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 4, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 5, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 6, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 7, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 8, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 9, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 10, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 11, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 12, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 13, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 14, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 15, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 16, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 17, 00:29:29.454 "state": "FREE", 00:29:29.454 "validity": 0.0 00:29:29.454 } 00:29:29.454 ], 00:29:29.454 "read-only": true 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "name": "cache_device", 00:29:29.454 "type": "bdev", 00:29:29.454 "chunks": [ 00:29:29.454 { 00:29:29.454 "id": 0, 00:29:29.454 "state": "INACTIVE", 00:29:29.454 "utilization": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 1, 00:29:29.454 "state": "OPEN", 00:29:29.454 "utilization": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 2, 00:29:29.454 "state": "OPEN", 00:29:29.454 "utilization": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 3, 00:29:29.454 "state": "FREE", 00:29:29.454 "utilization": 0.0 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "id": 4, 00:29:29.454 "state": "FREE", 00:29:29.454 "utilization": 0.0 00:29:29.454 } 00:29:29.454 ], 00:29:29.454 "read-only": true 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "name": "verbose_mode", 00:29:29.454 "value": true, 00:29:29.454 "unit": "", 00:29:29.454 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:29.454 }, 00:29:29.454 { 00:29:29.454 "name": "prep_upgrade_on_shutdown", 00:29:29.454 "value": false, 00:29:29.454 "unit": "", 00:29:29.454 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:29.454 } 00:29:29.454 ] 00:29:29.454 } 00:29:29.454 22:25:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:29.454 22:25:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:29.454 22:25:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:29.713 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:29.713 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:29.713 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:29.713 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:29.713 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:29.974 Validate MD5 checksum, iteration 1 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:29.974 22:25:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:29.974 [2024-12-16 22:25:36.290589] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
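For readers following along: the `tcp_dd` call above is a thin wrapper that runs `spdk_dd` as an NVMe/TCP initiator against the target's RPC socket and copies `--count` blocks of `--bs` bytes out of the exported `ftln1` bdev into a scratch file. A minimal sketch of the same step, using only the paths and flags visible in the log (the wrapper itself is not reproduced, and nothing beyond the logged invocation should be assumed):

```bash
#!/usr/bin/env bash
# Sketch of the logged tcp_dd step. Reads 1024 x 1 MiB blocks from bdev
# "ftln1" (attached over NVMe/TCP via ini.json) into a local file at
# queue depth 2, starting at block offset 0.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
  --rpc-socket=/var/tmp/spdk.tgt.sock \
  --json="$SPDK_DIR/test/ftl/config/ini.json" \
  --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" \
  --bs=1048576 --count=1024 --qd=2 --skip=0
```

The banner that follows is this `spdk_dd` process booting; its DPDK EAL parameters are printed next.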
00:29:29.974 [2024-12-16 22:25:36.291467] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96106 ] 00:29:30.235 [2024-12-16 22:25:36.453374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.235 [2024-12-16 22:25:36.482392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.622  [2024-12-16T22:25:38.914Z] Copying: 592/1024 [MB] (592 MBps) [2024-12-16T22:25:39.487Z] Copying: 1024/1024 [MB] (average 591 MBps) 00:29:33.140 00:29:33.140 22:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:33.140 22:25:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:35.687 Validate MD5 checksum, iteration 2 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1ac2a74d2bbf88112276f95b424c6d3 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1ac2a74d2bbf88112276f95b424c6d3 != \f\1\a\c\2\a\7\4\d\2\b\b\f\8\8\1\1\2\2\7\6\f\9\5\b\4\2\4\c\6\d\3 ]] 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:35.687 22:25:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:35.687 [2024-12-16 22:25:41.551506] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
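The backslash-heavy comparison above is just `set -x` rendering a plain string test: the digest that `md5sum` computes over the slice copied out of `ftln1` is matched against the digest expected for that slice, and `skip` then advances by 1024 blocks so iteration 2 reads the next 1 GiB. A hedged reconstruction of one iteration follows; the variable names are illustrative, and the expected digest is the value logged for iteration 1:

```bash
#!/usr/bin/env bash
# Illustrative reconstruction of one "Validate MD5 checksum" iteration.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
file=$SPDK_DIR/test/ftl/file
expected=f1ac2a74d2bbf88112276f95b424c6d3   # digest logged for iteration 1
skip=0

sum=$(md5sum "$file" | cut -f1 '-d ')       # first field = hex digest
if [[ $sum != "$expected" ]]; then
  echo "checksum mismatch at skip=$skip: $sum != $expected" >&2
  exit 1
fi
skip=$((skip + 1024))                       # next read starts 1 GiB further in
```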
00:29:35.687 [2024-12-16 22:25:41.551622] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96163 ] 00:29:35.687 [2024-12-16 22:25:41.708725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.687 [2024-12-16 22:25:41.726942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.074  [2024-12-16T22:25:43.993Z] Copying: 604/1024 [MB] (604 MBps) [2024-12-16T22:25:44.564Z] Copying: 1024/1024 [MB] (average 594 MBps) 00:29:38.217 00:29:38.217 22:25:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:38.217 22:25:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=92b53af91831033fa4227f8a67d44446 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 92b53af91831033fa4227f8a67d44446 != \9\2\b\5\3\a\f\9\1\8\3\1\0\3\3\f\a\4\2\2\7\f\8\a\6\7\d\4\4\4\4\6 ]] 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 96037 ]] 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 96037 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96223 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96223 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96223 ']' 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
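This is the dirty-shutdown half of the test: `tcp_target_shutdown_dirty` kills the target holding the FTL instance with SIGKILL, so FTL never persists a clean shutdown, and `tcp_target_setup` then starts a fresh target from the saved tgt.json. Reduced to its essentials it looks roughly like the sketch below (`waitforlisten` is the autotest_common.sh helper seen in the log; the surrounding setup is an assumption):

```bash
#!/usr/bin/env bash
# Rough shape of tcp_target_shutdown_dirty + tcp_target_setup, as logged.
# Assumes $spdk_tgt_pid holds the old target's PID and that tgt.json was
# saved while that target was healthy.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

kill -9 "$spdk_tgt_pid"          # no clean FTL shutdown, on purpose
unset spdk_tgt_pid

"$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
  --config="$SPDK_DIR/test/ftl/config/tgt.json" &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"    # poll /var/tmp/spdk.sock until RPC is up
```

On restart, FTL detects the unclean state ("SHM: clean 0, shm_clean 0" below) and replays band state and open NV-cache chunks before serving I/O.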
00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.767 22:25:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:40.767 [2024-12-16 22:25:46.575060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:40.767 [2024-12-16 22:25:46.575153] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96223 ] 00:29:40.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 96037 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:40.767 [2024-12-16 22:25:46.728720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.767 [2024-12-16 22:25:46.756998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.767 [2024-12-16 22:25:47.095343] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:40.767 [2024-12-16 22:25:47.095423] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:41.060 [2024-12-16 22:25:47.248252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.060 [2024-12-16 22:25:47.248315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:41.060 [2024-12-16 22:25:47.248336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:41.060 [2024-12-16 22:25:47.248345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.060 [2024-12-16 22:25:47.248408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.060 [2024-12-16 22:25:47.248420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:41.060 [2024-12-16 22:25:47.248432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:29:41.060 [2024-12-16 22:25:47.248445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.248472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:41.061 [2024-12-16 22:25:47.248816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:41.061 [2024-12-16 22:25:47.248869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.248879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:41.061 [2024-12-16 22:25:47.248898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.407 ms 00:29:41.061 [2024-12-16 22:25:47.248907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.249370] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:41.061 [2024-12-16 22:25:47.256055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.256125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:41.061 [2024-12-16 22:25:47.256145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.690 ms 00:29:41.061 [2024-12-16 22:25:47.256155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.257945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:41.061 [2024-12-16 22:25:47.257995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:41.061 [2024-12-16 22:25:47.258008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:41.061 [2024-12-16 22:25:47.258021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.258362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.258392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:41.061 [2024-12-16 22:25:47.258407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:29:41.061 [2024-12-16 22:25:47.258416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.258463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.258474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:41.061 [2024-12-16 22:25:47.258482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:41.061 [2024-12-16 22:25:47.258491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.258524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.258542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:41.061 [2024-12-16 22:25:47.258550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:41.061 [2024-12-16 22:25:47.258559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.258597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:41.061 [2024-12-16 22:25:47.259985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.260026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:41.061 [2024-12-16 22:25:47.260038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.394 ms 00:29:41.061 [2024-12-16 22:25:47.260047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.260095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.260109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:41.061 [2024-12-16 22:25:47.260119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:41.061 [2024-12-16 22:25:47.260127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.260152] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:41.061 [2024-12-16 22:25:47.260179] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:41.061 [2024-12-16 22:25:47.260216] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:41.061 [2024-12-16 22:25:47.260248] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:41.061 [2024-12-16 22:25:47.260362] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:41.061 [2024-12-16 22:25:47.260383] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:41.061 [2024-12-16 22:25:47.260395] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:41.061 [2024-12-16 22:25:47.260408] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:41.061 [2024-12-16 22:25:47.260417] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:41.061 [2024-12-16 22:25:47.260427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:41.061 [2024-12-16 22:25:47.260435] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:41.061 [2024-12-16 22:25:47.260444] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:41.061 [2024-12-16 22:25:47.260451] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:41.061 [2024-12-16 22:25:47.260460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.260468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:41.061 [2024-12-16 22:25:47.260482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:29:41.061 [2024-12-16 22:25:47.260490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.260576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.061 [2024-12-16 22:25:47.260597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:41.061 [2024-12-16 22:25:47.260606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:41.061 [2024-12-16 22:25:47.260616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.061 [2024-12-16 22:25:47.260729] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:41.061 [2024-12-16 22:25:47.260744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:41.061 [2024-12-16 22:25:47.260755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:41.061 [2024-12-16 22:25:47.260768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.260778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:41.061 [2024-12-16 22:25:47.260787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.260797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:41.061 [2024-12-16 22:25:47.260806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:41.061 [2024-12-16 22:25:47.260815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:41.061 [2024-12-16 22:25:47.260823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.260832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:41.061 [2024-12-16 22:25:47.260867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:41.061 [2024-12-16 22:25:47.260876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.260889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:41.061 [2024-12-16 22:25:47.260901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:41.061 [2024-12-16 22:25:47.260908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.260916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:41.061 [2024-12-16 22:25:47.260924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:41.061 [2024-12-16 22:25:47.260933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.260941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:41.061 [2024-12-16 22:25:47.260949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:41.061 [2024-12-16 22:25:47.260956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.061 [2024-12-16 22:25:47.260964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:41.061 [2024-12-16 22:25:47.260973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:41.061 [2024-12-16 22:25:47.260982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.061 [2024-12-16 22:25:47.260990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:41.061 [2024-12-16 22:25:47.261005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:41.061 [2024-12-16 22:25:47.261012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.061 [2024-12-16 22:25:47.261020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:41.061 [2024-12-16 22:25:47.261032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:41.061 [2024-12-16 22:25:47.261040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:41.061 [2024-12-16 22:25:47.261048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:41.061 [2024-12-16 22:25:47.261055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:41.061 [2024-12-16 22:25:47.261062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.261070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:41.061 [2024-12-16 22:25:47.261077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:41.061 [2024-12-16 22:25:47.261083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.261090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:41.061 [2024-12-16 22:25:47.261097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:41.061 [2024-12-16 22:25:47.261103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.261111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:41.061 [2024-12-16 22:25:47.261117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:41.061 [2024-12-16 22:25:47.261124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:41.061 [2024-12-16 22:25:47.261130] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:41.061 [2024-12-16 22:25:47.261138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:41.062 [2024-12-16 22:25:47.261157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:41.062 [2024-12-16 22:25:47.261169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:41.062 [2024-12-16 22:25:47.261178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:41.062 [2024-12-16 22:25:47.261186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:41.062 [2024-12-16 22:25:47.261192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:41.062 [2024-12-16 22:25:47.261199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:41.062 [2024-12-16 22:25:47.261205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:41.062 [2024-12-16 22:25:47.261212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:41.062 [2024-12-16 22:25:47.261221] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:41.062 [2024-12-16 22:25:47.261233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:41.062 [2024-12-16 22:25:47.261249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:41.062 [2024-12-16 22:25:47.261274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:41.062 [2024-12-16 22:25:47.261282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:41.062 [2024-12-16 22:25:47.261291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:41.062 [2024-12-16 22:25:47.261298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:41.062 [2024-12-16 22:25:47.261356] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:41.062 [2024-12-16 22:25:47.261365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:41.062 [2024-12-16 22:25:47.261381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:41.062 [2024-12-16 22:25:47.261389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:41.062 [2024-12-16 22:25:47.261397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:41.062 [2024-12-16 22:25:47.261405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.261416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:41.062 [2024-12-16 22:25:47.261426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.746 ms 00:29:41.062 [2024-12-16 22:25:47.261434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.275996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.276269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:41.062 [2024-12-16 22:25:47.276291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.509 ms 00:29:41.062 [2024-12-16 22:25:47.276308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.276356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.276367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:41.062 [2024-12-16 22:25:47.276377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:41.062 [2024-12-16 22:25:47.276389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.291421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.291589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:41.062 [2024-12-16 22:25:47.291606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.966 ms 00:29:41.062 [2024-12-16 22:25:47.291614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.291651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.291660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:41.062 [2024-12-16 22:25:47.291674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.062 [2024-12-16 22:25:47.291684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.291777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.291791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:41.062 [2024-12-16 22:25:47.291805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:41.062 [2024-12-16 22:25:47.291812] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.291870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.291879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:41.062 [2024-12-16 22:25:47.291894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:41.062 [2024-12-16 22:25:47.291904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.300206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.300243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:41.062 [2024-12-16 22:25:47.300253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.280 ms 00:29:41.062 [2024-12-16 22:25:47.300261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.300351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.300363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:41.062 [2024-12-16 22:25:47.300374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:41.062 [2024-12-16 22:25:47.300382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.318037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.318100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:41.062 [2024-12-16 22:25:47.318119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.634 ms 00:29:41.062 [2024-12-16 22:25:47.318132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.319934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.320131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:41.062 [2024-12-16 22:25:47.320165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:29:41.062 [2024-12-16 22:25:47.320177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.340376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.340418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:41.062 [2024-12-16 22:25:47.340430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.141 ms 00:29:41.062 [2024-12-16 22:25:47.340439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.340566] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:41.062 [2024-12-16 22:25:47.340668] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:41.062 [2024-12-16 22:25:47.340762] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:41.062 [2024-12-16 22:25:47.340876] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:41.062 [2024-12-16 22:25:47.340886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.340895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:41.062 [2024-12-16 
22:25:47.340906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.409 ms 00:29:41.062 [2024-12-16 22:25:47.340918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.340990] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:41.062 [2024-12-16 22:25:47.341006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.341014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:41.062 [2024-12-16 22:25:47.341023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:41.062 [2024-12-16 22:25:47.341034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.344795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.344831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:41.062 [2024-12-16 22:25:47.344866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.739 ms 00:29:41.062 [2024-12-16 22:25:47.344876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.062 [2024-12-16 22:25:47.345585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.062 [2024-12-16 22:25:47.345622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:41.062 [2024-12-16 22:25:47.345634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:41.062 [2024-12-16 22:25:47.345643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.063 [2024-12-16 22:25:47.345717] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:41.063 [2024-12-16 22:25:47.345910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.063 [2024-12-16 22:25:47.345923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:41.063 [2024-12-16 22:25:47.345940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:29:41.063 [2024-12-16 22:25:47.345948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.007 [2024-12-16 22:25:48.098618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.007 [2024-12-16 22:25:48.098847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:42.007 [2024-12-16 22:25:48.098871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 752.332 ms 00:29:42.007 [2024-12-16 22:25:48.098881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.007 [2024-12-16 22:25:48.100639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.007 [2024-12-16 22:25:48.100679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:42.007 [2024-12-16 22:25:48.100696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.293 ms 00:29:42.007 [2024-12-16 22:25:48.100705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.007 [2024-12-16 22:25:48.101200] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:42.007 [2024-12-16 22:25:48.101226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.007 [2024-12-16 22:25:48.101236] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:42.007 [2024-12-16 22:25:48.101256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.492 ms 00:29:42.007 [2024-12-16 22:25:48.101265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.007 [2024-12-16 22:25:48.101298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.007 [2024-12-16 22:25:48.101312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:42.007 [2024-12-16 22:25:48.101321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:42.007 [2024-12-16 22:25:48.101329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.007 [2024-12-16 22:25:48.101363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 755.645 ms, result 0 00:29:42.007 [2024-12-16 22:25:48.101403] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:42.007 [2024-12-16 22:25:48.101628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.007 [2024-12-16 22:25:48.101640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:42.007 [2024-12-16 22:25:48.101648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.226 ms 00:29:42.007 [2024-12-16 22:25:48.101656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.944536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.944686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:42.952 [2024-12-16 22:25:48.944785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 842.228 ms 00:29:42.952 [2024-12-16 22:25:48.944812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.946357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.946472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:42.952 [2024-12-16 22:25:48.946525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.096 ms 00:29:42.952 [2024-12-16 22:25:48.946547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.946990] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:42.952 [2024-12-16 22:25:48.947105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.947157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:42.952 [2024-12-16 22:25:48.947181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.510 ms 00:29:42.952 [2024-12-16 22:25:48.947200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.947279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.947396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:42.952 [2024-12-16 22:25:48.947422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:42.952 [2024-12-16 22:25:48.947441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 
22:25:48.947496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 846.082 ms, result 0 00:29:42.952 [2024-12-16 22:25:48.947561] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:42.952 [2024-12-16 22:25:48.947697] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:42.952 [2024-12-16 22:25:48.947732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.947752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:42.952 [2024-12-16 22:25:48.947774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1602.032 ms 00:29:42.952 [2024-12-16 22:25:48.947804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.947870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.947968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:42.952 [2024-12-16 22:25:48.947994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:42.952 [2024-12-16 22:25:48.948014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.956234] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:42.952 [2024-12-16 22:25:48.956423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.956459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:42.952 [2024-12-16 22:25:48.956512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.380 ms 00:29:42.952 [2024-12-16 22:25:48.956536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.957230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.957315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:42.952 [2024-12-16 22:25:48.957328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:29:42.952 [2024-12-16 22:25:48.957343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.959579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.959672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:42.952 [2024-12-16 22:25:48.959684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.216 ms 00:29:42.952 [2024-12-16 22:25:48.959692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.959729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.959739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:42.952 [2024-12-16 22:25:48.959747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:42.952 [2024-12-16 22:25:48.959754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.959878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.959888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:42.952 
[2024-12-16 22:25:48.959900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:42.952 [2024-12-16 22:25:48.959907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.959929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.959937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:42.952 [2024-12-16 22:25:48.959947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:42.952 [2024-12-16 22:25:48.959957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.959989] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:42.952 [2024-12-16 22:25:48.959999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.960008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:42.952 [2024-12-16 22:25:48.960015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:42.952 [2024-12-16 22:25:48.960027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.960077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:42.952 [2024-12-16 22:25:48.960086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:42.952 [2024-12-16 22:25:48.960094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:29:42.952 [2024-12-16 22:25:48.960102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:42.952 [2024-12-16 22:25:48.961081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1712.410 ms, result 0 00:29:42.952 [2024-12-16 22:25:48.976892] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.952 [2024-12-16 22:25:48.992891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:42.952 [2024-12-16 22:25:49.001003] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:42.952 Validate MD5 checksum, iteration 1 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:42.952 22:25:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:42.952 22:25:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:42.952 [2024-12-16 22:25:49.115547] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:42.953 [2024-12-16 22:25:49.115876] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96253 ] 00:29:42.953 [2024-12-16 22:25:49.276617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.214 [2024-12-16 22:25:49.306110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.598  [2024-12-16T22:25:51.887Z] Copying: 503/1024 [MB] (503 MBps) [2024-12-16T22:25:52.458Z] Copying: 1024/1024 [MB] (average 527 MBps) 00:29:46.111 00:29:46.111 22:25:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:46.111 22:25:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:48.013 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:48.013 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1ac2a74d2bbf88112276f95b424c6d3 00:29:48.013 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1ac2a74d2bbf88112276f95b424c6d3 != \f\1\a\c\2\a\7\4\d\2\b\b\f\8\8\1\1\2\2\7\6\f\9\5\b\4\2\4\c\6\d\3 ]] 00:29:48.013 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:48.013 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:48.013 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:48.271 Validate MD5 checksum, iteration 2 00:29:48.271 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:48.271 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:48.271 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:48.271 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:48.271 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:48.271 22:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:48.271 [2024-12-16 22:25:54.419611] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 23.11.0 initialization... 00:29:48.271 [2024-12-16 22:25:54.419855] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96315 ] 00:29:48.271 [2024-12-16 22:25:54.577484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.271 [2024-12-16 22:25:54.596616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.657  [2024-12-16T22:25:56.952Z] Copying: 535/1024 [MB] (535 MBps) [2024-12-16T22:25:57.212Z] Copying: 1024/1024 [MB] (average 539 MBps) 00:29:50.865 00:29:51.124 22:25:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:51.124 22:25:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=92b53af91831033fa4227f8a67d44446 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 92b53af91831033fa4227f8a67d44446 != \9\2\b\5\3\a\f\9\1\8\3\1\0\3\3\f\a\4\2\2\7\f\8\a\6\7\d\4\4\4\4\6 ]] 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:52.498 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96223 ]] 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96223 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 96223 ']' 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 96223 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96223 00:29:52.757 killing process with pid 96223 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96223' 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 96223 00:29:52.757 22:25:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 96223 00:29:52.757 [2024-12-16 22:25:59.063527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:52.757 [2024-12-16 22:25:59.067180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.757 [2024-12-16 22:25:59.067212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:52.758 [2024-12-16 22:25:59.067224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:52.758 [2024-12-16 22:25:59.067231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.067250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:52.758 [2024-12-16 22:25:59.067750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.067771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:52.758 [2024-12-16 22:25:59.067778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.490 ms 00:29:52.758 [2024-12-16 22:25:59.067785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.068030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.068040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:52.758 [2024-12-16 22:25:59.068048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.226 ms 00:29:52.758 [2024-12-16 22:25:59.068055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.069469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.069491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:52.758 [2024-12-16 22:25:59.069499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.400 ms 00:29:52.758 [2024-12-16 22:25:59.069508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.070364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.070380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:52.758 [2024-12-16 22:25:59.070388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:29:52.758 [2024-12-16 22:25:59.070395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.071895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.071922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:52.758 [2024-12-16 22:25:59.071934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.472 ms 00:29:52.758 [2024-12-16 22:25:59.071940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.073210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.073319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:52.758 [2024-12-16 22:25:59.073332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.241 ms 00:29:52.758 [2024-12-16 22:25:59.073339] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.073422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.073431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:52.758 [2024-12-16 22:25:59.073438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:52.758 [2024-12-16 22:25:59.073449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.075030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.075056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:52.758 [2024-12-16 22:25:59.075063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.568 ms 00:29:52.758 [2024-12-16 22:25:59.075070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.076480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.076568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:52.758 [2024-12-16 22:25:59.076580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.385 ms 00:29:52.758 [2024-12-16 22:25:59.076586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.077781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.077802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:52.758 [2024-12-16 22:25:59.077809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.173 ms 00:29:52.758 [2024-12-16 22:25:59.077815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.079356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.079380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:52.758 [2024-12-16 22:25:59.079387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.487 ms 00:29:52.758 [2024-12-16 22:25:59.079393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.079416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:52.758 [2024-12-16 22:25:59.079428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:52.758 [2024-12-16 22:25:59.079436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:52.758 [2024-12-16 22:25:59.079442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:52.758 [2024-12-16 22:25:59.079448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 
[2024-12-16 22:25:59.079480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:52.758 [2024-12-16 22:25:59.079540] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:52.758 [2024-12-16 22:25:59.079547] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 22c2e225-9ea6-4375-a243-b0b5e29a1b4f 00:29:52.758 [2024-12-16 22:25:59.079553] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:52.758 [2024-12-16 22:25:59.079559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:52.758 [2024-12-16 22:25:59.079565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:52.758 [2024-12-16 22:25:59.079571] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:52.758 [2024-12-16 22:25:59.079576] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:52.758 [2024-12-16 22:25:59.079583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:52.758 [2024-12-16 22:25:59.079592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:52.758 [2024-12-16 22:25:59.079597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:52.758 [2024-12-16 22:25:59.079604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:52.758 [2024-12-16 22:25:59.079610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.079616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:52.758 [2024-12-16 22:25:59.079625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:29:52.758 [2024-12-16 22:25:59.079631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.081398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.081478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:52.758 [2024-12-16 22:25:59.081516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.752 ms 00:29:52.758 [2024-12-16 22:25:59.081534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
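The two "Validate MD5 checksum" passes traced earlier follow a short loop in upgrade_shutdown.sh (@96-105 per the xtrace): dump 1024 MiB from ftln1 over NVMe/TCP, hash the dump, compare it against the sum recorded when the data was written, then advance the skip offset. A minimal sketch reconstructed from the trace; the md5 array name is an assumption, since the trace only shows the already-expanded comparison:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Dump 1024 x 1 MiB blocks from ftln1 over NVMe/TCP, resuming at $skip
        # ($testdir/file is /home/vagrant/spdk_repo/spdk/test/ftl/file above)
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))                     # 0 -> 1024 -> 2048 in the trace
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # md5[i] is assumed to hold the sum taken when the data was written
        [[ $sum == "${md5[i]}" ]]            # f1ac2a74..., then 92b53af9... here
    done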
00:29:52.758 [2024-12-16 22:25:59.081632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.758 [2024-12-16 22:25:59.081651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:52.758 [2024-12-16 22:25:59.081667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:52.758 [2024-12-16 22:25:59.081682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.087497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.758 [2024-12-16 22:25:59.087583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:52.758 [2024-12-16 22:25:59.087621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.758 [2024-12-16 22:25:59.087642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.088237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.758 [2024-12-16 22:25:59.088315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:52.758 [2024-12-16 22:25:59.088367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.758 [2024-12-16 22:25:59.088384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.088455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.758 [2024-12-16 22:25:59.088483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:52.758 [2024-12-16 22:25:59.088506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.758 [2024-12-16 22:25:59.088546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.758 [2024-12-16 22:25:59.088576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.758 [2024-12-16 22:25:59.088592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:52.758 [2024-12-16 22:25:59.088600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.758 [2024-12-16 22:25:59.088606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.759 [2024-12-16 22:25:59.099624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:52.759 [2024-12-16 22:25:59.099724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:52.759 [2024-12-16 22:25:59.099763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:52.759 [2024-12-16 22:25:59.099785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.108057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.018 [2024-12-16 22:25:59.108161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:53.018 [2024-12-16 22:25:59.108199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.018 [2024-12-16 22:25:59.108218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.108286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.018 [2024-12-16 22:25:59.108306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:53.018 [2024-12-16 22:25:59.108355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.018 [2024-12-16 22:25:59.108379] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.108417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.018 [2024-12-16 22:25:59.108439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:53.018 [2024-12-16 22:25:59.108455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.018 [2024-12-16 22:25:59.108494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.108567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.018 [2024-12-16 22:25:59.108594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:53.018 [2024-12-16 22:25:59.108657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.018 [2024-12-16 22:25:59.108675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.108714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.018 [2024-12-16 22:25:59.108734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:53.018 [2024-12-16 22:25:59.108753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.018 [2024-12-16 22:25:59.108848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.108896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.018 [2024-12-16 22:25:59.108914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:53.018 [2024-12-16 22:25:59.108929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.018 [2024-12-16 22:25:59.108944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.109035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:53.018 [2024-12-16 22:25:59.109059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:53.018 [2024-12-16 22:25:59.109075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:53.018 [2024-12-16 22:25:59.109091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:53.018 [2024-12-16 22:25:59.109207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 41.998 ms, result 0 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:53.585 Remove shared memory files 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:53.585 22:25:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid96037 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:53.585 ************************************ 00:29:53.585 END TEST ftl_upgrade_shutdown 00:29:53.585 ************************************ 00:29:53.585 00:29:53.585 real 1m11.565s 00:29:53.585 user 1m35.016s 00:29:53.585 sys 0m20.382s 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.585 22:25:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:53.585 22:25:59 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:53.585 22:25:59 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:53.585 22:25:59 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:53.585 22:25:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.585 22:25:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:53.585 ************************************ 00:29:53.585 START TEST ftl_restore_fast 00:29:53.585 ************************************ 00:29:53.585 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:53.585 * Looking for test storage... 00:29:53.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:53.585 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:53.585 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:29:53.585 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.845 --rc genhtml_branch_coverage=1 00:29:53.845 --rc genhtml_function_coverage=1 00:29:53.845 --rc genhtml_legend=1 00:29:53.845 --rc geninfo_all_blocks=1 00:29:53.845 --rc geninfo_unexecuted_blocks=1 00:29:53.845 00:29:53.845 ' 00:29:53.845 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:53.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.845 --rc genhtml_branch_coverage=1 00:29:53.845 --rc genhtml_function_coverage=1 00:29:53.845 --rc genhtml_legend=1 00:29:53.845 --rc geninfo_all_blocks=1 00:29:53.846 --rc geninfo_unexecuted_blocks=1 00:29:53.846 00:29:53.846 ' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:53.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.846 --rc genhtml_branch_coverage=1 00:29:53.846 --rc genhtml_function_coverage=1 00:29:53.846 --rc genhtml_legend=1 00:29:53.846 --rc geninfo_all_blocks=1 00:29:53.846 --rc geninfo_unexecuted_blocks=1 00:29:53.846 00:29:53.846 ' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:53.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.846 --rc genhtml_branch_coverage=1 00:29:53.846 --rc genhtml_function_coverage=1 00:29:53.846 --rc genhtml_legend=1 00:29:53.846 --rc geninfo_all_blocks=1 00:29:53.846 --rc geninfo_unexecuted_blocks=1 00:29:53.846 00:29:53.846 ' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
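The lcov gate above steps through common.sh's lt/cmp_versions/decimal helpers: "lt 1.15 2" splits both versions on IFS=.-: and resolves true on the first component (1 < 2), which selects the --rc lcov_branch_coverage options exported next. A sketch of the comparison reconstructed from the xtrace; the in-tree scripts/common.sh may differ in detail:

    decimal() {
        # Coerce one version component to a number (assumed: non-numeric -> 0)
        [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v
        local IFS=.-:                              # split on dots, dashes, colons
        read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
        read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            # The first differing component decides the comparison
            if ((ver1[v] > ver2[v])); then [[ $op == '>' ]]; return; fi
            if ((ver1[v] < ver2[v])); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '=' ]]                           # all components equal
    }

    lt() { cmp_versions "$1" '<' "$2"; }           # lt 1.15 2 -> true, as traced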
00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:53.846 22:25:59 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:53.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.6TS86zVeUn 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96448 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96448 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 96448 ']' 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.846 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:53.846 [2024-12-16 22:26:00.074187] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
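restore.sh's prologue, as traced above (@13 through @41): parse the -f and -c flags, take the device BDF positionally, then launch spdk_tgt in the background and block until its RPC socket answers. A sketch reconstructed from the xtrace; the getopts body and the exact launch line are inferred, only their expansions appear in the log:

    mount_dir=$(mktemp -d)             # /tmp/tmp.6TS86zVeUn in this run
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;     # -c 0000:00:10.0: BDF of the NV cache device
            f) fast_shutdown=1 ;;      # -f: exercise the --fast-shutdown create path
            u) uuid=$OPTARG ;;         # assumed; the optstring accepts -u but it is unused here
        esac
    done
    shift $((OPTIND - 1))              # expands to "shift 3" in the trace
    device=$1                          # 0000:00:11.0
    timeout=240

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    "$rootdir/build/bin/spdk_tgt" &    # restore.sh@38; emits the EAL banner above
    svcpid=$!                          # 96448 here
    waitforlisten "$svcpid"            # returns once /var/tmp/spdk.sock accepts RPCs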
00:29:53.846 [2024-12-16 22:26:00.074306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96448 ] 00:29:54.105 [2024-12-16 22:26:00.228720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.105 [2024-12-16 22:26:00.254274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:54.670 22:26:00 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:54.928 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:55.185 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:55.185 { 00:29:55.185 "name": "nvme0n1", 00:29:55.185 "aliases": [ 00:29:55.185 "62f223bd-ec6c-4341-ad93-9bbdacab0764" 00:29:55.185 ], 00:29:55.185 "product_name": "NVMe disk", 00:29:55.185 "block_size": 4096, 00:29:55.185 "num_blocks": 1310720, 00:29:55.185 "uuid": "62f223bd-ec6c-4341-ad93-9bbdacab0764", 00:29:55.185 "numa_id": -1, 00:29:55.185 "assigned_rate_limits": { 00:29:55.185 "rw_ios_per_sec": 0, 00:29:55.185 "rw_mbytes_per_sec": 0, 00:29:55.185 "r_mbytes_per_sec": 0, 00:29:55.185 "w_mbytes_per_sec": 0 00:29:55.185 }, 00:29:55.185 "claimed": true, 00:29:55.185 "claim_type": "read_many_write_one", 00:29:55.185 "zoned": false, 00:29:55.185 "supported_io_types": { 00:29:55.185 "read": true, 00:29:55.185 "write": true, 00:29:55.185 "unmap": true, 00:29:55.185 "flush": true, 00:29:55.185 "reset": true, 00:29:55.185 "nvme_admin": true, 00:29:55.185 "nvme_io": true, 00:29:55.185 "nvme_io_md": false, 00:29:55.185 "write_zeroes": true, 00:29:55.185 "zcopy": false, 00:29:55.185 "get_zone_info": false, 00:29:55.185 "zone_management": false, 00:29:55.185 "zone_append": false, 00:29:55.185 "compare": true, 00:29:55.185 "compare_and_write": false, 00:29:55.185 "abort": true, 00:29:55.185 "seek_hole": false, 00:29:55.185 "seek_data": false, 00:29:55.185 "copy": true, 00:29:55.185 "nvme_iov_md": 
false 00:29:55.185 }, 00:29:55.185 "driver_specific": { 00:29:55.185 "nvme": [ 00:29:55.185 { 00:29:55.185 "pci_address": "0000:00:11.0", 00:29:55.185 "trid": { 00:29:55.185 "trtype": "PCIe", 00:29:55.185 "traddr": "0000:00:11.0" 00:29:55.185 }, 00:29:55.185 "ctrlr_data": { 00:29:55.185 "cntlid": 0, 00:29:55.186 "vendor_id": "0x1b36", 00:29:55.186 "model_number": "QEMU NVMe Ctrl", 00:29:55.186 "serial_number": "12341", 00:29:55.186 "firmware_revision": "8.0.0", 00:29:55.186 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:55.186 "oacs": { 00:29:55.186 "security": 0, 00:29:55.186 "format": 1, 00:29:55.186 "firmware": 0, 00:29:55.186 "ns_manage": 1 00:29:55.186 }, 00:29:55.186 "multi_ctrlr": false, 00:29:55.186 "ana_reporting": false 00:29:55.186 }, 00:29:55.186 "vs": { 00:29:55.186 "nvme_version": "1.4" 00:29:55.186 }, 00:29:55.186 "ns_data": { 00:29:55.186 "id": 1, 00:29:55.186 "can_share": false 00:29:55.186 } 00:29:55.186 } 00:29:55.186 ], 00:29:55.186 "mp_policy": "active_passive" 00:29:55.186 } 00:29:55.186 } 00:29:55.186 ]' 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:55.186 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:55.444 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=b014b0e5-8e80-4939-a559-fb04f301bbfa 00:29:55.444 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:55.444 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b014b0e5-8e80-4939-a559-fb04f301bbfa 00:29:55.702 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:55.702 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ef682119-1c64-4c9a-b476-0baa510cca34 00:29:55.702 22:26:01 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ef682119-1c64-4c9a-b476-0baa510cca34 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:55.959 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:56.218 { 00:29:56.218 "name": "d2d41408-8e93-4528-b9ce-5982f2a1198b", 00:29:56.218 "aliases": [ 00:29:56.218 "lvs/nvme0n1p0" 00:29:56.218 ], 00:29:56.218 "product_name": "Logical Volume", 00:29:56.218 "block_size": 4096, 00:29:56.218 "num_blocks": 26476544, 00:29:56.218 "uuid": "d2d41408-8e93-4528-b9ce-5982f2a1198b", 00:29:56.218 "assigned_rate_limits": { 00:29:56.218 "rw_ios_per_sec": 0, 00:29:56.218 "rw_mbytes_per_sec": 0, 00:29:56.218 "r_mbytes_per_sec": 0, 00:29:56.218 "w_mbytes_per_sec": 0 00:29:56.218 }, 00:29:56.218 "claimed": false, 00:29:56.218 "zoned": false, 00:29:56.218 "supported_io_types": { 00:29:56.218 "read": true, 00:29:56.218 "write": true, 00:29:56.218 "unmap": true, 00:29:56.218 "flush": false, 00:29:56.218 "reset": true, 00:29:56.218 "nvme_admin": false, 00:29:56.218 "nvme_io": false, 00:29:56.218 "nvme_io_md": false, 00:29:56.218 "write_zeroes": true, 00:29:56.218 "zcopy": false, 00:29:56.218 "get_zone_info": false, 00:29:56.218 "zone_management": false, 00:29:56.218 "zone_append": false, 00:29:56.218 "compare": false, 00:29:56.218 "compare_and_write": false, 00:29:56.218 "abort": false, 00:29:56.218 "seek_hole": true, 00:29:56.218 "seek_data": true, 00:29:56.218 "copy": false, 00:29:56.218 "nvme_iov_md": false 00:29:56.218 }, 00:29:56.218 "driver_specific": { 00:29:56.218 "lvol": { 00:29:56.218 "lvol_store_uuid": "ef682119-1c64-4c9a-b476-0baa510cca34", 00:29:56.218 "base_bdev": "nvme0n1", 00:29:56.218 "thin_provision": true, 00:29:56.218 "num_allocated_clusters": 0, 00:29:56.218 "snapshot": false, 00:29:56.218 "clone": false, 00:29:56.218 "esnap_clone": false 00:29:56.218 } 00:29:56.218 } 00:29:56.218 } 00:29:56.218 ]' 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:56.218 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
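get_bdev_size, exercised twice above, pulls the bdev geometry out of "rpc.py bdev_get_bdevs -b <name>" with jq and reports the size in MiB; for the lvol that works out to 4096 B/blk x 26476544 blks / 2^20 = 103424 MiB, and 5120 MiB for nvme0n1 earlier. A sketch of the autotest_common.sh helper, mirroring only what the trace shows:

    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 above
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 for the lvol
        echo $((bs * nb / 1024 / 1024))               # bytes -> MiB: 103424
    }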
00:29:56.476 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:56.476 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:56.476 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:56.476 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:56.476 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:56.476 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:56.476 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:56.476 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:56.734 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:56.734 { 00:29:56.734 "name": "d2d41408-8e93-4528-b9ce-5982f2a1198b", 00:29:56.734 "aliases": [ 00:29:56.734 "lvs/nvme0n1p0" 00:29:56.734 ], 00:29:56.734 "product_name": "Logical Volume", 00:29:56.734 "block_size": 4096, 00:29:56.734 "num_blocks": 26476544, 00:29:56.734 "uuid": "d2d41408-8e93-4528-b9ce-5982f2a1198b", 00:29:56.734 "assigned_rate_limits": { 00:29:56.734 "rw_ios_per_sec": 0, 00:29:56.734 "rw_mbytes_per_sec": 0, 00:29:56.734 "r_mbytes_per_sec": 0, 00:29:56.734 "w_mbytes_per_sec": 0 00:29:56.734 }, 00:29:56.734 "claimed": false, 00:29:56.734 "zoned": false, 00:29:56.734 "supported_io_types": { 00:29:56.734 "read": true, 00:29:56.734 "write": true, 00:29:56.734 "unmap": true, 00:29:56.734 "flush": false, 00:29:56.734 "reset": true, 00:29:56.734 "nvme_admin": false, 00:29:56.734 "nvme_io": false, 00:29:56.734 "nvme_io_md": false, 00:29:56.734 "write_zeroes": true, 00:29:56.734 "zcopy": false, 00:29:56.734 "get_zone_info": false, 00:29:56.734 "zone_management": false, 00:29:56.734 "zone_append": false, 00:29:56.734 "compare": false, 00:29:56.734 "compare_and_write": false, 00:29:56.734 "abort": false, 00:29:56.734 "seek_hole": true, 00:29:56.734 "seek_data": true, 00:29:56.734 "copy": false, 00:29:56.734 "nvme_iov_md": false 00:29:56.734 }, 00:29:56.734 "driver_specific": { 00:29:56.734 "lvol": { 00:29:56.735 "lvol_store_uuid": "ef682119-1c64-4c9a-b476-0baa510cca34", 00:29:56.735 "base_bdev": "nvme0n1", 00:29:56.735 "thin_provision": true, 00:29:56.735 "num_allocated_clusters": 0, 00:29:56.735 "snapshot": false, 00:29:56.735 "clone": false, 00:29:56.735 "esnap_clone": false 00:29:56.735 } 00:29:56.735 } 00:29:56.735 } 00:29:56.735 ]' 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:56.735 22:26:02 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
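The write-buffer cache is a single split carved off the nvc0n1 controller namespace by the bdev_split_create call issued above; nvc0n1p0, assigned just below, is the resulting bdev. The sizing rule itself is not visible in the trace, though the requested 5171 MiB matches 5% of the 103424 MiB base volume under integer division (103424 * 5 / 100 = 5171). Issued standalone, the same RPC would be:

    # One 5171 MiB split off nvc0n1; the child bdev comes back as nvc0n1p0
    "$rootdir/scripts/rpc.py" bdev_split_create nvc0n1 -s 5171 1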
00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d2d41408-8e93-4528-b9ce-5982f2a1198b 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:56.993 { 00:29:56.993 "name": "d2d41408-8e93-4528-b9ce-5982f2a1198b", 00:29:56.993 "aliases": [ 00:29:56.993 "lvs/nvme0n1p0" 00:29:56.993 ], 00:29:56.993 "product_name": "Logical Volume", 00:29:56.993 "block_size": 4096, 00:29:56.993 "num_blocks": 26476544, 00:29:56.993 "uuid": "d2d41408-8e93-4528-b9ce-5982f2a1198b", 00:29:56.993 "assigned_rate_limits": { 00:29:56.993 "rw_ios_per_sec": 0, 00:29:56.993 "rw_mbytes_per_sec": 0, 00:29:56.993 "r_mbytes_per_sec": 0, 00:29:56.993 "w_mbytes_per_sec": 0 00:29:56.993 }, 00:29:56.993 "claimed": false, 00:29:56.993 "zoned": false, 00:29:56.993 "supported_io_types": { 00:29:56.993 "read": true, 00:29:56.993 "write": true, 00:29:56.993 "unmap": true, 00:29:56.993 "flush": false, 00:29:56.993 "reset": true, 00:29:56.993 "nvme_admin": false, 00:29:56.993 "nvme_io": false, 00:29:56.993 "nvme_io_md": false, 00:29:56.993 "write_zeroes": true, 00:29:56.993 "zcopy": false, 00:29:56.993 "get_zone_info": false, 00:29:56.993 "zone_management": false, 00:29:56.993 "zone_append": false, 00:29:56.993 "compare": false, 00:29:56.993 "compare_and_write": false, 00:29:56.993 "abort": false, 00:29:56.993 "seek_hole": true, 00:29:56.993 "seek_data": true, 00:29:56.993 "copy": false, 00:29:56.993 "nvme_iov_md": false 00:29:56.993 }, 00:29:56.993 "driver_specific": { 00:29:56.993 "lvol": { 00:29:56.993 "lvol_store_uuid": "ef682119-1c64-4c9a-b476-0baa510cca34", 00:29:56.993 "base_bdev": "nvme0n1", 00:29:56.993 "thin_provision": true, 00:29:56.993 "num_allocated_clusters": 0, 00:29:56.993 "snapshot": false, 00:29:56.993 "clone": false, 00:29:56.993 "esnap_clone": false 00:29:56.993 } 00:29:56.993 } 00:29:56.993 } 00:29:56.993 ]' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d2d41408-8e93-4528-b9ce-5982f2a1198b --l2p_dram_limit 10' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:56.993 22:26:03 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:56.993 22:26:03 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d2d41408-8e93-4528-b9ce-5982f2a1198b --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:57.252 [2024-12-16 22:26:03.518992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.519041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:57.252 [2024-12-16 22:26:03.519052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:57.252 [2024-12-16 22:26:03.519061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.519099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.519109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.252 [2024-12-16 22:26:03.519117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:57.252 [2024-12-16 22:26:03.519127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.519145] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:57.252 [2024-12-16 22:26:03.519331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:57.252 [2024-12-16 22:26:03.519346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.519355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.252 [2024-12-16 22:26:03.519362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:29:57.252 [2024-12-16 22:26:03.519370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.519391] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1b65a023-bcff-4196-b4ae-6df41c46d80d 00:29:57.252 [2024-12-16 22:26:03.520687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.520713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:57.252 [2024-12-16 22:26:03.520722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:57.252 [2024-12-16 22:26:03.520729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.527660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.527785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.252 [2024-12-16 22:26:03.527800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:29:57.252 [2024-12-16 22:26:03.527807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.527917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.527926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.252 [2024-12-16 22:26:03.527935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 
00:29:57.252 [2024-12-16 22:26:03.527940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.527982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.527990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:57.252 [2024-12-16 22:26:03.528002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:57.252 [2024-12-16 22:26:03.528008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.528026] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:57.252 [2024-12-16 22:26:03.529647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.529673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.252 [2024-12-16 22:26:03.529680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:29:57.252 [2024-12-16 22:26:03.529688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.529719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.529727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:57.252 [2024-12-16 22:26:03.529734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:57.252 [2024-12-16 22:26:03.529743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.252 [2024-12-16 22:26:03.529756] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:57.252 [2024-12-16 22:26:03.529894] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:57.252 [2024-12-16 22:26:03.529904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:57.252 [2024-12-16 22:26:03.529916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:57.252 [2024-12-16 22:26:03.529924] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:57.252 [2024-12-16 22:26:03.529936] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:57.252 [2024-12-16 22:26:03.529945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:57.252 [2024-12-16 22:26:03.529954] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:57.252 [2024-12-16 22:26:03.529961] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:57.252 [2024-12-16 22:26:03.529968] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:57.252 [2024-12-16 22:26:03.529974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.252 [2024-12-16 22:26:03.529982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:57.252 [2024-12-16 22:26:03.529989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:29:57.253 [2024-12-16 22:26:03.529996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.253 [2024-12-16 22:26:03.530061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.253 [2024-12-16 
22:26:03.530071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:57.253 [2024-12-16 22:26:03.530077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:57.253 [2024-12-16 22:26:03.530086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.253 [2024-12-16 22:26:03.530158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:57.253 [2024-12-16 22:26:03.530167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:57.253 [2024-12-16 22:26:03.530173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:57.253 [2024-12-16 22:26:03.530196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:57.253 [2024-12-16 22:26:03.530213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.253 [2024-12-16 22:26:03.530225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:57.253 [2024-12-16 22:26:03.530233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:57.253 [2024-12-16 22:26:03.530238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.253 [2024-12-16 22:26:03.530246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:57.253 [2024-12-16 22:26:03.530252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:57.253 [2024-12-16 22:26:03.530259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:57.253 [2024-12-16 22:26:03.530271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:57.253 [2024-12-16 22:26:03.530289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:57.253 [2024-12-16 22:26:03.530310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:57.253 [2024-12-16 22:26:03.530328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:29:57.253 [2024-12-16 22:26:03.530352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:57.253 [2024-12-16 22:26:03.530370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.253 [2024-12-16 22:26:03.530384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:57.253 [2024-12-16 22:26:03.530391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:57.253 [2024-12-16 22:26:03.530397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.253 [2024-12-16 22:26:03.530404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:57.253 [2024-12-16 22:26:03.530411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:57.253 [2024-12-16 22:26:03.530417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:57.253 [2024-12-16 22:26:03.530430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:57.253 [2024-12-16 22:26:03.530437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530443] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:57.253 [2024-12-16 22:26:03.530450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:57.253 [2024-12-16 22:26:03.530460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.253 [2024-12-16 22:26:03.530475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:57.253 [2024-12-16 22:26:03.530481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:57.253 [2024-12-16 22:26:03.530489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:57.253 [2024-12-16 22:26:03.530495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:57.253 [2024-12-16 22:26:03.530502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:57.253 [2024-12-16 22:26:03.530509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:57.253 [2024-12-16 22:26:03.530518] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:57.253 [2024-12-16 22:26:03.530528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.253 [2024-12-16 22:26:03.530538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:57.253 [2024-12-16 22:26:03.530545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:57.253 [2024-12-16 22:26:03.530552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:29:57.253 [2024-12-16 22:26:03.530558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:57.253 [2024-12-16 22:26:03.530566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:57.253 [2024-12-16 22:26:03.530573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:57.253 [2024-12-16 22:26:03.530590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:57.253 [2024-12-16 22:26:03.530597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:57.253 [2024-12-16 22:26:03.530605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:57.253 [2024-12-16 22:26:03.530611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:57.253 [2024-12-16 22:26:03.530619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:57.253 [2024-12-16 22:26:03.530625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:57.253 [2024-12-16 22:26:03.530633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:57.253 [2024-12-16 22:26:03.530641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:57.253 [2024-12-16 22:26:03.530649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:57.253 [2024-12-16 22:26:03.530656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.253 [2024-12-16 22:26:03.530664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:57.253 [2024-12-16 22:26:03.530670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:57.253 [2024-12-16 22:26:03.530678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:57.253 [2024-12-16 22:26:03.530685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:57.253 [2024-12-16 22:26:03.530693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.253 [2024-12-16 22:26:03.530698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:57.253 [2024-12-16 22:26:03.530707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:29:57.253 [2024-12-16 22:26:03.530713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.253 [2024-12-16 22:26:03.530744] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:29:57.253 [2024-12-16 22:26:03.530752] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:01.449 [2024-12-16 22:26:06.985458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:06.985682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:01.449 [2024-12-16 22:26:06.985742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3454.699 ms 00:30:01.449 [2024-12-16 22:26:06.985763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:06.996328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:06.996463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:01.449 [2024-12-16 22:26:06.996515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.459 ms 00:30:01.449 [2024-12-16 22:26:06.996534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:06.996642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:06.996662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:01.449 [2024-12-16 22:26:06.996823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:01.449 [2024-12-16 22:26:06.996870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:07.006698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:07.006815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:01.449 [2024-12-16 22:26:07.007019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.777 ms 00:30:01.449 [2024-12-16 22:26:07.007046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:07.007081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:07.007099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:01.449 [2024-12-16 22:26:07.007120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:01.449 [2024-12-16 22:26:07.007136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:07.007590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:07.007672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:01.449 [2024-12-16 22:26:07.007720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:30:01.449 [2024-12-16 22:26:07.007729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:07.007820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:07.007831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:01.449 [2024-12-16 22:26:07.007852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:01.449 [2024-12-16 22:26:07.007859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:07.014279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:07.014377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:30:01.449 [2024-12-16 22:26:07.014392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms 00:30:01.449 [2024-12-16 22:26:07.014399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.449 [2024-12-16 22:26:07.033421] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:01.449 [2024-12-16 22:26:07.036885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.449 [2024-12-16 22:26:07.036922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:01.449 [2024-12-16 22:26:07.036933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.433 ms 00:30:01.449 [2024-12-16 22:26:07.036943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.113342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.113377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:01.450 [2024-12-16 22:26:07.113390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.363 ms 00:30:01.450 [2024-12-16 22:26:07.113400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.113549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.113560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:01.450 [2024-12-16 22:26:07.113567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:30:01.450 [2024-12-16 22:26:07.113575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.117453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.117491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:01.450 [2024-12-16 22:26:07.117502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.854 ms 00:30:01.450 [2024-12-16 22:26:07.117515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.120497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.120613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:01.450 [2024-12-16 22:26:07.120626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:30:01.450 [2024-12-16 22:26:07.120634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.120888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.120899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:01.450 [2024-12-16 22:26:07.120906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:30:01.450 [2024-12-16 22:26:07.120916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.149049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.149080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:01.450 [2024-12-16 22:26:07.149095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.118 ms 00:30:01.450 [2024-12-16 22:26:07.149103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 
[2024-12-16 22:26:07.154041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.154145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:01.450 [2024-12-16 22:26:07.154157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:30:01.450 [2024-12-16 22:26:07.154166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.157722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.157816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:01.450 [2024-12-16 22:26:07.157828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.530 ms 00:30:01.450 [2024-12-16 22:26:07.157846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.161763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.161793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:01.450 [2024-12-16 22:26:07.161800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.891 ms 00:30:01.450 [2024-12-16 22:26:07.161810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.161848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.161866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:01.450 [2024-12-16 22:26:07.161873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:01.450 [2024-12-16 22:26:07.161881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.161937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.161947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:01.450 [2024-12-16 22:26:07.161954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:01.450 [2024-12-16 22:26:07.161964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.163164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3643.801 ms, result 0 00:30:01.450 { 00:30:01.450 "name": "ftl0", 00:30:01.450 "uuid": "1b65a023-bcff-4196-b4ae-6df41c46d80d" 00:30:01.450 } 00:30:01.450 22:26:07 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:01.450 22:26:07 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:01.450 22:26:07 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:01.450 22:26:07 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:01.450 [2024-12-16 22:26:07.567702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.567733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:01.450 [2024-12-16 22:26:07.567746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:01.450 [2024-12-16 22:26:07.567752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.567774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:30:01.450 [2024-12-16 22:26:07.568337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.568363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:01.450 [2024-12-16 22:26:07.568371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:30:01.450 [2024-12-16 22:26:07.568379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.568581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.568593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:01.450 [2024-12-16 22:26:07.568603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:30:01.450 [2024-12-16 22:26:07.568611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.571046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.571148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:01.450 [2024-12-16 22:26:07.571159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:30:01.450 [2024-12-16 22:26:07.571167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.575729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.575753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:01.450 [2024-12-16 22:26:07.575761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:30:01.450 [2024-12-16 22:26:07.575771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.577622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.577652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:01.450 [2024-12-16 22:26:07.577660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:30:01.450 [2024-12-16 22:26:07.577666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.582627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.582659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:01.450 [2024-12-16 22:26:07.582667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:30:01.450 [2024-12-16 22:26:07.582675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.582768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.582778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:01.450 [2024-12-16 22:26:07.582786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:01.450 [2024-12-16 22:26:07.582794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.585265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.585361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:01.450 [2024-12-16 22:26:07.585372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.457 ms 00:30:01.450 
[2024-12-16 22:26:07.585379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.587351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.587381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:01.450 [2024-12-16 22:26:07.587388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:30:01.450 [2024-12-16 22:26:07.587396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.589091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.589117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:01.450 [2024-12-16 22:26:07.589124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:30:01.450 [2024-12-16 22:26:07.589132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.590728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.450 [2024-12-16 22:26:07.590757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:01.450 [2024-12-16 22:26:07.590763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:30:01.450 [2024-12-16 22:26:07.590772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.450 [2024-12-16 22:26:07.590795] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:01.450 [2024-12-16 22:26:07.590808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:01.450 [2024-12-16 22:26:07.590903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:30:01.451 [2024-12-16 22:26:07.590923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.590998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591438] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:01.451 [2024-12-16 22:26:07.591528] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:01.451 [2024-12-16 22:26:07.591534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b65a023-bcff-4196-b4ae-6df41c46d80d 00:30:01.452 [2024-12-16 22:26:07.591543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:01.452 [2024-12-16 22:26:07.591548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:01.452 [2024-12-16 22:26:07.591555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:01.452 [2024-12-16 22:26:07.591561] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:01.452 [2024-12-16 22:26:07.591568] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:01.452 [2024-12-16 22:26:07.591576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:01.452 [2024-12-16 22:26:07.591583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:01.452 [2024-12-16 22:26:07.591588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:01.452 [2024-12-16 22:26:07.591594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:01.452 [2024-12-16 22:26:07.591600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.452 [2024-12-16 22:26:07.591607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:01.452 [2024-12-16 22:26:07.591615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:30:01.452 [2024-12-16 22:26:07.591623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.593202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.452 [2024-12-16 22:26:07.593225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:30:01.452 [2024-12-16 22:26:07.593232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:30:01.452 [2024-12-16 22:26:07.593241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.593305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:01.452 [2024-12-16 22:26:07.593314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:01.452 [2024-12-16 22:26:07.593324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:01.452 [2024-12-16 22:26:07.593332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.599287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.599316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:01.452 [2024-12-16 22:26:07.599325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.599333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.599379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.599387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:01.452 [2024-12-16 22:26:07.599393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.599400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.599451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.599463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:01.452 [2024-12-16 22:26:07.599470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.599480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.599493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.599501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:01.452 [2024-12-16 22:26:07.599507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.599515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.610607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.610645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:01.452 [2024-12-16 22:26:07.610655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.610664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.619501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.619544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:01.452 [2024-12-16 22:26:07.619552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.619560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.619627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 
22:26:07.619640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:01.452 [2024-12-16 22:26:07.619647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.619655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.619686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.619694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:01.452 [2024-12-16 22:26:07.619701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.619709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.619766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.619775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:01.452 [2024-12-16 22:26:07.619781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.619789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.619815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.619826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:01.452 [2024-12-16 22:26:07.619832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.620074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.620134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.620156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:01.452 [2024-12-16 22:26:07.620173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.620190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.620243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:01.452 [2024-12-16 22:26:07.620265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:01.452 [2024-12-16 22:26:07.620279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:01.452 [2024-12-16 22:26:07.620296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:01.452 [2024-12-16 22:26:07.620434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.683 ms, result 0 00:30:01.452 true 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96448 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96448 ']' 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96448 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96448 00:30:01.452 killing process with pid 96448 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96448' 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 96448 00:30:01.452 22:26:07 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 96448 00:30:05.659 22:26:11 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:10.997 262144+0 records in 00:30:10.997 262144+0 records out 00:30:10.997 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.37728 s, 245 MB/s 00:30:10.997 22:26:16 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:12.376 22:26:18 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:12.376 [2024-12-16 22:26:18.351184] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:12.376 [2024-12-16 22:26:18.351396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96657 ] 00:30:12.376 [2024-12-16 22:26:18.504956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.376 [2024-12-16 22:26:18.526621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.376 [2024-12-16 22:26:18.629431] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:12.376 [2024-12-16 22:26:18.629513] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:12.637 [2024-12-16 22:26:18.790616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.637 [2024-12-16 22:26:18.790677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:12.638 [2024-12-16 22:26:18.790692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:12.638 [2024-12-16 22:26:18.790701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.790767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.790779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:12.638 [2024-12-16 22:26:18.790791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:12.638 [2024-12-16 22:26:18.790799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.790829] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:12.638 [2024-12-16 22:26:18.791151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:12.638 [2024-12-16 22:26:18.791171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.791182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:12.638 [2024-12-16 22:26:18.791194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:30:12.638 [2024-12-16 22:26:18.791202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:30:12.638 [2024-12-16 22:26:18.792908] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:12.638 [2024-12-16 22:26:18.796740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.796794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:12.638 [2024-12-16 22:26:18.796813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.834 ms 00:30:12.638 [2024-12-16 22:26:18.796825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.796919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.796933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:12.638 [2024-12-16 22:26:18.796942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:12.638 [2024-12-16 22:26:18.796950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.805163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.805210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:12.638 [2024-12-16 22:26:18.805231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.166 ms 00:30:12.638 [2024-12-16 22:26:18.805239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.805345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.805356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:12.638 [2024-12-16 22:26:18.805365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:30:12.638 [2024-12-16 22:26:18.805373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.805431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.805442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:12.638 [2024-12-16 22:26:18.805455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:12.638 [2024-12-16 22:26:18.805466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.805490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:12.638 [2024-12-16 22:26:18.807686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.807731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:12.638 [2024-12-16 22:26:18.807751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:30:12.638 [2024-12-16 22:26:18.807759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.807798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.807807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:12.638 [2024-12-16 22:26:18.807816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:12.638 [2024-12-16 22:26:18.807826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.807873] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 
0 00:30:12.638 [2024-12-16 22:26:18.807902] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:12.638 [2024-12-16 22:26:18.807946] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:12.638 [2024-12-16 22:26:18.807962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:12.638 [2024-12-16 22:26:18.808067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:12.638 [2024-12-16 22:26:18.808078] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:12.638 [2024-12-16 22:26:18.808092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:12.638 [2024-12-16 22:26:18.808102] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808112] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808121] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:12.638 [2024-12-16 22:26:18.808129] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:12.638 [2024-12-16 22:26:18.808137] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:12.638 [2024-12-16 22:26:18.808145] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:12.638 [2024-12-16 22:26:18.808153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.808165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:12.638 [2024-12-16 22:26:18.808173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:30:12.638 [2024-12-16 22:26:18.808183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.808268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.638 [2024-12-16 22:26:18.808277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:12.638 [2024-12-16 22:26:18.808285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:12.638 [2024-12-16 22:26:18.808292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.638 [2024-12-16 22:26:18.808390] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:12.638 [2024-12-16 22:26:18.808405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:12.638 [2024-12-16 22:26:18.808414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:12.638 [2024-12-16 22:26:18.808452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:12.638 [2024-12-16 22:26:18.808476] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:12.638 [2024-12-16 22:26:18.808492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:12.638 [2024-12-16 22:26:18.808502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:12.638 [2024-12-16 22:26:18.808510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:12.638 [2024-12-16 22:26:18.808518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:12.638 [2024-12-16 22:26:18.808526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:12.638 [2024-12-16 22:26:18.808536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:12.638 [2024-12-16 22:26:18.808553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:12.638 [2024-12-16 22:26:18.808578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:12.638 [2024-12-16 22:26:18.808604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:12.638 [2024-12-16 22:26:18.808627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:12.638 [2024-12-16 22:26:18.808655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:12.638 [2024-12-16 22:26:18.808670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:12.638 [2024-12-16 22:26:18.808677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:12.638 [2024-12-16 22:26:18.808695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:12.638 [2024-12-16 22:26:18.808702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:12.638 [2024-12-16 22:26:18.808709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:12.638 [2024-12-16 22:26:18.808718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:12.638 [2024-12-16 22:26:18.808725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:12.638 [2024-12-16 22:26:18.808733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.638 [2024-12-16 
22:26:18.808741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:12.638 [2024-12-16 22:26:18.808749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:12.638 [2024-12-16 22:26:18.808757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.638 [2024-12-16 22:26:18.808767] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:12.639 [2024-12-16 22:26:18.808778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:12.639 [2024-12-16 22:26:18.808787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:12.639 [2024-12-16 22:26:18.808795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:12.639 [2024-12-16 22:26:18.808807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:12.639 [2024-12-16 22:26:18.808817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:12.639 [2024-12-16 22:26:18.808825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:12.639 [2024-12-16 22:26:18.808833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:12.639 [2024-12-16 22:26:18.808856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:12.639 [2024-12-16 22:26:18.808863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:12.639 [2024-12-16 22:26:18.808871] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:12.639 [2024-12-16 22:26:18.808881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:12.639 [2024-12-16 22:26:18.808891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:12.639 [2024-12-16 22:26:18.808899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:12.639 [2024-12-16 22:26:18.808906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:12.639 [2024-12-16 22:26:18.808913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:12.639 [2024-12-16 22:26:18.808924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:12.639 [2024-12-16 22:26:18.808931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:12.639 [2024-12-16 22:26:18.808938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:12.639 [2024-12-16 22:26:18.808945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:12.639 [2024-12-16 22:26:18.808952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:12.639 [2024-12-16 22:26:18.808959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:12.639 [2024-12-16 22:26:18.808967] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:12.639 [2024-12-16 22:26:18.808974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:12.639 [2024-12-16 22:26:18.808981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:12.639 [2024-12-16 22:26:18.808989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:12.639 [2024-12-16 22:26:18.808996] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:12.639 [2024-12-16 22:26:18.809005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:12.639 [2024-12-16 22:26:18.809013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:12.639 [2024-12-16 22:26:18.809021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:12.639 [2024-12-16 22:26:18.809028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:12.639 [2024-12-16 22:26:18.809036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:12.639 [2024-12-16 22:26:18.809045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.809054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:12.639 [2024-12-16 22:26:18.809061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:30:12.639 [2024-12-16 22:26:18.809072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.822458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.822512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:12.639 [2024-12-16 22:26:18.822524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.333 ms 00:30:12.639 [2024-12-16 22:26:18.822532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.822642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.822652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:12.639 [2024-12-16 22:26:18.822661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:30:12.639 [2024-12-16 22:26:18.822669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.844077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.844131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:12.639 [2024-12-16 22:26:18.844146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.349 ms 00:30:12.639 [2024-12-16 22:26:18.844155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.844203] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.844214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:12.639 [2024-12-16 22:26:18.844224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:12.639 [2024-12-16 22:26:18.844233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.844765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.844798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:12.639 [2024-12-16 22:26:18.844810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:30:12.639 [2024-12-16 22:26:18.844820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.845011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.845093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:12.639 [2024-12-16 22:26:18.845108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:30:12.639 [2024-12-16 22:26:18.845117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.852418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.852589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:12.639 [2024-12-16 22:26:18.852606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.275 ms 00:30:12.639 [2024-12-16 22:26:18.852615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.856353] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:12.639 [2024-12-16 22:26:18.856406] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:12.639 [2024-12-16 22:26:18.856418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.856426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:12.639 [2024-12-16 22:26:18.856436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:30:12.639 [2024-12-16 22:26:18.856442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.872099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.872154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:12.639 [2024-12-16 22:26:18.872166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.597 ms 00:30:12.639 [2024-12-16 22:26:18.872178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.875134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.875309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:12.639 [2024-12-16 22:26:18.875328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:30:12.639 [2024-12-16 22:26:18.875336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.878194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:12.639 [2024-12-16 22:26:18.878244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:12.639 [2024-12-16 22:26:18.878256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:30:12.639 [2024-12-16 22:26:18.878263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.878633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.878648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:12.639 [2024-12-16 22:26:18.878658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:30:12.639 [2024-12-16 22:26:18.878666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.902297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.902366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:12.639 [2024-12-16 22:26:18.902379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.614 ms 00:30:12.639 [2024-12-16 22:26:18.902388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.910453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:12.639 [2024-12-16 22:26:18.913575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.913747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:12.639 [2024-12-16 22:26:18.913766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.137 ms 00:30:12.639 [2024-12-16 22:26:18.913781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.913879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.913891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:12.639 [2024-12-16 22:26:18.913901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:12.639 [2024-12-16 22:26:18.913910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.913979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.913989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:12.639 [2024-12-16 22:26:18.914001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:12.639 [2024-12-16 22:26:18.914010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.639 [2024-12-16 22:26:18.914030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.639 [2024-12-16 22:26:18.914039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:12.639 [2024-12-16 22:26:18.914048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:12.639 [2024-12-16 22:26:18.914055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.640 [2024-12-16 22:26:18.914093] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:12.640 [2024-12-16 22:26:18.914109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.640 [2024-12-16 22:26:18.914117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 
00:30:12.640 [2024-12-16 22:26:18.914125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:12.640 [2024-12-16 22:26:18.914136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.640 [2024-12-16 22:26:18.919412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.640 [2024-12-16 22:26:18.919460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:12.640 [2024-12-16 22:26:18.919471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.257 ms 00:30:12.640 [2024-12-16 22:26:18.919479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.640 [2024-12-16 22:26:18.919558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:12.640 [2024-12-16 22:26:18.919569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:12.640 [2024-12-16 22:26:18.919577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:12.640 [2024-12-16 22:26:18.919593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:12.640 [2024-12-16 22:26:18.920753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.675 ms, result 0 00:30:14.025  [2024-12-16T22:26:20.945Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-16T22:26:22.330Z] Copying: 47/1024 [MB] (32 MBps) [2024-12-16T22:26:23.273Z] Copying: 98/1024 [MB] (50 MBps) [2024-12-16T22:26:24.215Z] Copying: 127/1024 [MB] (29 MBps) [2024-12-16T22:26:25.153Z] Copying: 170/1024 [MB] (42 MBps) [2024-12-16T22:26:26.094Z] Copying: 181/1024 [MB] (11 MBps) [2024-12-16T22:26:27.036Z] Copying: 192/1024 [MB] (10 MBps) [2024-12-16T22:26:27.980Z] Copying: 202/1024 [MB] (10 MBps) [2024-12-16T22:26:29.367Z] Copying: 212/1024 [MB] (10 MBps) [2024-12-16T22:26:29.939Z] Copying: 223/1024 [MB] (10 MBps) [2024-12-16T22:26:31.325Z] Copying: 261/1024 [MB] (37 MBps) [2024-12-16T22:26:32.269Z] Copying: 285/1024 [MB] (23 MBps) [2024-12-16T22:26:33.212Z] Copying: 305/1024 [MB] (19 MBps) [2024-12-16T22:26:34.156Z] Copying: 327/1024 [MB] (22 MBps) [2024-12-16T22:26:35.099Z] Copying: 348/1024 [MB] (21 MBps) [2024-12-16T22:26:36.043Z] Copying: 367/1024 [MB] (18 MBps) [2024-12-16T22:26:36.988Z] Copying: 383/1024 [MB] (16 MBps) [2024-12-16T22:26:37.964Z] Copying: 396/1024 [MB] (13 MBps) [2024-12-16T22:26:39.356Z] Copying: 415/1024 [MB] (18 MBps) [2024-12-16T22:26:40.300Z] Copying: 437/1024 [MB] (21 MBps) [2024-12-16T22:26:41.245Z] Copying: 453/1024 [MB] (16 MBps) [2024-12-16T22:26:42.191Z] Copying: 477/1024 [MB] (24 MBps) [2024-12-16T22:26:43.137Z] Copying: 494/1024 [MB] (16 MBps) [2024-12-16T22:26:44.082Z] Copying: 512/1024 [MB] (17 MBps) [2024-12-16T22:26:45.028Z] Copying: 531/1024 [MB] (18 MBps) [2024-12-16T22:26:45.970Z] Copying: 545/1024 [MB] (14 MBps) [2024-12-16T22:26:47.358Z] Copying: 560/1024 [MB] (14 MBps) [2024-12-16T22:26:47.931Z] Copying: 577/1024 [MB] (17 MBps) [2024-12-16T22:26:49.320Z] Copying: 591/1024 [MB] (14 MBps) [2024-12-16T22:26:50.272Z] Copying: 609/1024 [MB] (18 MBps) [2024-12-16T22:26:51.216Z] Copying: 628/1024 [MB] (19 MBps) [2024-12-16T22:26:52.171Z] Copying: 639/1024 [MB] (10 MBps) [2024-12-16T22:26:53.117Z] Copying: 652/1024 [MB] (13 MBps) [2024-12-16T22:26:54.061Z] Copying: 664/1024 [MB] (11 MBps) [2024-12-16T22:26:55.007Z] Copying: 676/1024 [MB] (11 MBps) [2024-12-16T22:26:55.951Z] Copying: 693/1024 [MB] (17 MBps) [2024-12-16T22:26:57.339Z] Copying: 710/1024 [MB] (16 MBps) 
[2024-12-16T22:26:58.283Z] Copying: 726/1024 [MB] (16 MBps) [2024-12-16T22:26:59.227Z] Copying: 744/1024 [MB] (17 MBps) [2024-12-16T22:27:00.171Z] Copying: 760/1024 [MB] (15 MBps) [2024-12-16T22:27:01.142Z] Copying: 773/1024 [MB] (13 MBps) [2024-12-16T22:27:02.123Z] Copying: 793/1024 [MB] (19 MBps) [2024-12-16T22:27:03.063Z] Copying: 810/1024 [MB] (17 MBps) [2024-12-16T22:27:04.004Z] Copying: 826/1024 [MB] (15 MBps) [2024-12-16T22:27:04.945Z] Copying: 843/1024 [MB] (16 MBps) [2024-12-16T22:27:06.327Z] Copying: 854/1024 [MB] (10 MBps) [2024-12-16T22:27:07.270Z] Copying: 868/1024 [MB] (14 MBps) [2024-12-16T22:27:08.212Z] Copying: 879/1024 [MB] (10 MBps) [2024-12-16T22:27:09.152Z] Copying: 891/1024 [MB] (12 MBps) [2024-12-16T22:27:10.092Z] Copying: 907/1024 [MB] (15 MBps) [2024-12-16T22:27:11.035Z] Copying: 923/1024 [MB] (16 MBps) [2024-12-16T22:27:11.977Z] Copying: 942/1024 [MB] (18 MBps) [2024-12-16T22:27:13.359Z] Copying: 958/1024 [MB] (15 MBps) [2024-12-16T22:27:14.292Z] Copying: 977/1024 [MB] (19 MBps) [2024-12-16T22:27:15.238Z] Copying: 995/1024 [MB] (18 MBps) [2024-12-16T22:27:15.501Z] Copying: 1018/1024 [MB] (22 MBps) [2024-12-16T22:27:15.501Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-16 22:27:15.253867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.154 [2024-12-16 22:27:15.253950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:09.154 [2024-12-16 22:27:15.253969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:09.154 [2024-12-16 22:27:15.253984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.154 [2024-12-16 22:27:15.254009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:09.154 [2024-12-16 22:27:15.255039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.154 [2024-12-16 22:27:15.255073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:09.154 [2024-12-16 22:27:15.255087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:31:09.154 [2024-12-16 22:27:15.255096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.154 [2024-12-16 22:27:15.258324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.154 [2024-12-16 22:27:15.258378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:09.154 [2024-12-16 22:27:15.258389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:31:09.154 [2024-12-16 22:27:15.258398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.154 [2024-12-16 22:27:15.258434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.154 [2024-12-16 22:27:15.258444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:09.154 [2024-12-16 22:27:15.258453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:09.154 [2024-12-16 22:27:15.258461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.154 [2024-12-16 22:27:15.258534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.154 [2024-12-16 22:27:15.258554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:09.154 [2024-12-16 22:27:15.258564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:09.154 [2024-12-16 22:27:15.258572] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.154 [2024-12-16 22:27:15.258586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:09.154 [2024-12-16 22:27:15.258602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.258791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:09.154 [2024-12-16 22:27:15.259585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 
22:27:15.259624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:31:09.155 [2024-12-16 22:27:15.259829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:09.155 [2024-12-16 22:27:15.259871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:09.155 [2024-12-16 22:27:15.259879] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b65a023-bcff-4196-b4ae-6df41c46d80d 00:31:09.155 [2024-12-16 22:27:15.259888] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:09.155 [2024-12-16 22:27:15.259896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:09.155 [2024-12-16 22:27:15.259907] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:09.155 [2024-12-16 22:27:15.259917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:09.155 [2024-12-16 22:27:15.259926] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:09.155 [2024-12-16 22:27:15.259936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:09.155 [2024-12-16 22:27:15.259947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:09.155 [2024-12-16 22:27:15.259954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:09.155 [2024-12-16 22:27:15.259962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:09.155 [2024-12-16 22:27:15.259969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.155 [2024-12-16 22:27:15.259977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:09.155 [2024-12-16 22:27:15.259992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:31:09.155 [2024-12-16 22:27:15.260000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.263251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.155 [2024-12-16 22:27:15.263292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:09.155 [2024-12-16 22:27:15.263304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:31:09.155 [2024-12-16 22:27:15.263322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.263493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.155 [2024-12-16 22:27:15.263508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:09.155 [2024-12-16 22:27:15.263518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:31:09.155 [2024-12-16 22:27:15.263525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.274120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.274311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:09.155 [2024-12-16 22:27:15.274371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.274395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.274474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.274503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize bands metadata 00:31:09.155 [2024-12-16 22:27:15.274524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.274550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.274626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.274908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:09.155 [2024-12-16 22:27:15.274936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.274956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.275044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.275074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:09.155 [2024-12-16 22:27:15.275111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.275142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.294453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.294676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.155 [2024-12-16 22:27:15.294736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.294761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.309270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.309481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.155 [2024-12-16 22:27:15.309548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.309572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.309684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.309713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:09.155 [2024-12-16 22:27:15.309736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.309757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.309808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.309832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:09.155 [2024-12-16 22:27:15.309874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.309960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.310050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.310094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:09.155 [2024-12-16 22:27:15.310116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.310136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.310183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 
[2024-12-16 22:27:15.310206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:09.155 [2024-12-16 22:27:15.310227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.310246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.310309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.310380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:09.155 [2024-12-16 22:27:15.310404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.310423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.310483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:09.155 [2024-12-16 22:27:15.310494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:09.155 [2024-12-16 22:27:15.310504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:09.155 [2024-12-16 22:27:15.310517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.155 [2024-12-16 22:27:15.310715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 56.792 ms, result 0 00:31:09.416 00:31:09.416 00:31:09.416 22:27:15 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:09.416 [2024-12-16 22:27:15.694566] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:31:09.416 [2024-12-16 22:27:15.694708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97231 ] 00:31:09.676 [2024-12-16 22:27:15.853642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.677 [2024-12-16 22:27:15.880523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.677 [2024-12-16 22:27:15.999159] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:09.677 [2024-12-16 22:27:15.999334] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:09.938 [2024-12-16 22:27:16.163084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.938 [2024-12-16 22:27:16.163352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:09.938 [2024-12-16 22:27:16.163519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:09.938 [2024-12-16 22:27:16.163548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.938 [2024-12-16 22:27:16.163648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.938 [2024-12-16 22:27:16.163684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:09.938 [2024-12-16 22:27:16.163707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:09.938 [2024-12-16 22:27:16.163727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.938 [2024-12-16 22:27:16.163773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:09.938 [2024-12-16 22:27:16.164264] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:09.938 [2024-12-16 22:27:16.164353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.938 [2024-12-16 22:27:16.164380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:09.938 [2024-12-16 22:27:16.164408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:31:09.938 [2024-12-16 22:27:16.164428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.938 [2024-12-16 22:27:16.165155] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:09.938 [2024-12-16 22:27:16.165270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.938 [2024-12-16 22:27:16.165296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:09.938 [2024-12-16 22:27:16.165319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:31:09.938 [2024-12-16 22:27:16.165347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.165439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.939 [2024-12-16 22:27:16.165525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:09.939 [2024-12-16 22:27:16.165553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:09.939 [2024-12-16 22:27:16.165572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.165898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:09.939 [2024-12-16 22:27:16.165933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:09.939 [2024-12-16 22:27:16.165956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:31:09.939 [2024-12-16 22:27:16.165996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.166239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.939 [2024-12-16 22:27:16.166274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:09.939 [2024-12-16 22:27:16.166299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:31:09.939 [2024-12-16 22:27:16.166318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.166367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.939 [2024-12-16 22:27:16.166390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:09.939 [2024-12-16 22:27:16.166412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:09.939 [2024-12-16 22:27:16.166432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.166468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:09.939 [2024-12-16 22:27:16.169407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.939 [2024-12-16 22:27:16.169595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:09.939 [2024-12-16 22:27:16.169826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:31:09.939 [2024-12-16 22:27:16.169901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.169973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.939 [2024-12-16 22:27:16.170086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:09.939 [2024-12-16 22:27:16.170108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:09.939 [2024-12-16 22:27:16.170181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.170265] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:09.939 [2024-12-16 22:27:16.170317] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:09.939 [2024-12-16 22:27:16.170383] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:09.939 [2024-12-16 22:27:16.170424] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:09.939 [2024-12-16 22:27:16.170559] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:09.939 [2024-12-16 22:27:16.170749] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:09.939 [2024-12-16 22:27:16.170786] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:09.939 [2024-12-16 22:27:16.170819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:09.939 [2024-12-16 22:27:16.170894] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:09.939 [2024-12-16 22:27:16.170925] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:09.939 [2024-12-16 22:27:16.170945] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:09.939 [2024-12-16 22:27:16.170965] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:09.939 [2024-12-16 22:27:16.170990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:09.939 [2024-12-16 22:27:16.171025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.939 [2024-12-16 22:27:16.171051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:09.939 [2024-12-16 22:27:16.171071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:31:09.939 [2024-12-16 22:27:16.171089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.171260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.939 [2024-12-16 22:27:16.171345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:09.939 [2024-12-16 22:27:16.171395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:09.939 [2024-12-16 22:27:16.171417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.939 [2024-12-16 22:27:16.171544] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:09.939 [2024-12-16 22:27:16.171579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:09.939 [2024-12-16 22:27:16.171602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:09.939 [2024-12-16 22:27:16.171624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.939 [2024-12-16 22:27:16.171644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:09.939 [2024-12-16 22:27:16.171671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:09.939 [2024-12-16 22:27:16.171724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:09.939 [2024-12-16 22:27:16.171785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:09.939 [2024-12-16 22:27:16.171807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:09.939 [2024-12-16 22:27:16.171879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:09.939 [2024-12-16 22:27:16.171902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:09.939 [2024-12-16 22:27:16.171921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:09.939 [2024-12-16 22:27:16.171941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:09.939 [2024-12-16 22:27:16.171960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:09.939 [2024-12-16 22:27:16.171978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:09.939 [2024-12-16 22:27:16.171996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:09.939 [2024-12-16 22:27:16.172034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:09.939 [2024-12-16 22:27:16.172089] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:09.939 [2024-12-16 22:27:16.172132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.939 [2024-12-16 22:27:16.172170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:09.939 [2024-12-16 22:27:16.172190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.939 [2024-12-16 22:27:16.172227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:09.939 [2024-12-16 22:27:16.172283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.939 [2024-12-16 22:27:16.172324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:09.939 [2024-12-16 22:27:16.172346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:09.939 [2024-12-16 22:27:16.172386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:09.939 [2024-12-16 22:27:16.172405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:09.939 [2024-12-16 22:27:16.172511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:09.939 [2024-12-16 22:27:16.172541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:09.939 [2024-12-16 22:27:16.172568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:09.939 [2024-12-16 22:27:16.172587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:09.939 [2024-12-16 22:27:16.172623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:09.939 [2024-12-16 22:27:16.172648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:09.939 [2024-12-16 22:27:16.172695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:09.939 [2024-12-16 22:27:16.172723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172742] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:09.939 [2024-12-16 22:27:16.172773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:09.939 [2024-12-16 22:27:16.172793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:09.939 [2024-12-16 22:27:16.172823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:09.939 [2024-12-16 22:27:16.172861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:09.939 [2024-12-16 22:27:16.172880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:09.939 [2024-12-16 22:27:16.172900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:09.939 
[2024-12-16 22:27:16.172922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:09.939 [2024-12-16 22:27:16.172942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:09.939 [2024-12-16 22:27:16.172959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:09.939 [2024-12-16 22:27:16.172980] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:09.939 [2024-12-16 22:27:16.173098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.939 [2024-12-16 22:27:16.173137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:09.939 [2024-12-16 22:27:16.173169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:09.939 [2024-12-16 22:27:16.173197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:09.939 [2024-12-16 22:27:16.173227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:09.940 [2024-12-16 22:27:16.173259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:09.940 [2024-12-16 22:27:16.173288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:09.940 [2024-12-16 22:27:16.173317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:09.940 [2024-12-16 22:27:16.173346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:09.940 [2024-12-16 22:27:16.173374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:09.940 [2024-12-16 22:27:16.173405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:09.940 [2024-12-16 22:27:16.173434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:09.940 [2024-12-16 22:27:16.173466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:09.940 [2024-12-16 22:27:16.173540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:09.940 [2024-12-16 22:27:16.173573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:09.940 [2024-12-16 22:27:16.173602] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:09.940 [2024-12-16 22:27:16.173635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:09.940 [2024-12-16 22:27:16.173666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:09.940 [2024-12-16 22:27:16.173728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:09.940 [2024-12-16 22:27:16.173759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:09.940 [2024-12-16 22:27:16.173787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:09.940 [2024-12-16 22:27:16.173819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.173855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:09.940 [2024-12-16 22:27:16.173877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.344 ms 00:31:09.940 [2024-12-16 22:27:16.173924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.188386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.188557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.940 [2024-12-16 22:27:16.188586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.358 ms 00:31:09.940 [2024-12-16 22:27:16.188597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.188695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.188705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:09.940 [2024-12-16 22:27:16.188720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:09.940 [2024-12-16 22:27:16.188730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.217241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.217321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.940 [2024-12-16 22:27:16.217347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.443 ms 00:31:09.940 [2024-12-16 22:27:16.217364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.217440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.217459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:09.940 [2024-12-16 22:27:16.217485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:09.940 [2024-12-16 22:27:16.217501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.217674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.217701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:09.940 [2024-12-16 22:27:16.217718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:09.940 [2024-12-16 22:27:16.217734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.217993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.218019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:09.940 [2024-12-16 22:27:16.218066] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:31:09.940 [2024-12-16 22:27:16.218082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.229658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.229901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:09.940 [2024-12-16 22:27:16.229943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.539 ms 00:31:09.940 [2024-12-16 22:27:16.229952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.230112] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:09.940 [2024-12-16 22:27:16.230131] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:09.940 [2024-12-16 22:27:16.230142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.230152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:09.940 [2024-12-16 22:27:16.230165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:09.940 [2024-12-16 22:27:16.230176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.242567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.242629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:09.940 [2024-12-16 22:27:16.242649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.373 ms 00:31:09.940 [2024-12-16 22:27:16.242657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.242848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.242861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:09.940 [2024-12-16 22:27:16.242870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:31:09.940 [2024-12-16 22:27:16.242885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.242938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.242956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:09.940 [2024-12-16 22:27:16.242966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:09.940 [2024-12-16 22:27:16.242975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.243304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.243319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:09.940 [2024-12-16 22:27:16.243335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:31:09.940 [2024-12-16 22:27:16.243345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.243364] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:09.940 [2024-12-16 22:27:16.243377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.243395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:09.940 [2024-12-16 22:27:16.243404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:09.940 [2024-12-16 22:27:16.243412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.255375] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:09.940 [2024-12-16 22:27:16.255545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.255557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:09.940 [2024-12-16 22:27:16.255569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.114 ms 00:31:09.940 [2024-12-16 22:27:16.255582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.258083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.258119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:09.940 [2024-12-16 22:27:16.258130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.473 ms 00:31:09.940 [2024-12-16 22:27:16.258141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.258276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.258289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:09.940 [2024-12-16 22:27:16.258299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:09.940 [2024-12-16 22:27:16.258318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.258343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.258352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:09.940 [2024-12-16 22:27:16.258360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:09.940 [2024-12-16 22:27:16.258368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.258419] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:09.940 [2024-12-16 22:27:16.258436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.258444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:09.940 [2024-12-16 22:27:16.258452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:09.940 [2024-12-16 22:27:16.258462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.266409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.266466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:09.940 [2024-12-16 22:27:16.266489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.923 ms 00:31:09.940 [2024-12-16 22:27:16.266498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.266590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.940 [2024-12-16 22:27:16.266606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:09.940 [2024-12-16 22:27:16.266620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:31:09.940 [2024-12-16 22:27:16.266629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.940 [2024-12-16 22:27:16.268109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.484 ms, result 0 00:31:11.446  [2024-12-16T22:27:18.731Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-16T22:27:19.666Z] Copying: 35/1024 [MB] (18 MBps) [2024-12-16T22:27:20.601Z] Copying: 47/1024 [MB] (12 MBps) [2024-12-16T22:27:21.535Z] Copying: 59/1024 [MB] (12 MBps) [2024-12-16T22:27:22.470Z] Copying: 71/1024 [MB] (11 MBps) [2024-12-16T22:27:23.845Z] Copying: 85/1024 [MB] (14 MBps) [2024-12-16T22:27:24.780Z] Copying: 97/1024 [MB] (11 MBps) [2024-12-16T22:27:25.715Z] Copying: 109/1024 [MB] (11 MBps) [2024-12-16T22:27:26.649Z] Copying: 121/1024 [MB] (12 MBps) [2024-12-16T22:27:27.586Z] Copying: 134/1024 [MB] (12 MBps) [2024-12-16T22:27:28.528Z] Copying: 146/1024 [MB] (12 MBps) [2024-12-16T22:27:29.465Z] Copying: 158/1024 [MB] (11 MBps) [2024-12-16T22:27:30.848Z] Copying: 170/1024 [MB] (11 MBps) [2024-12-16T22:27:31.789Z] Copying: 181/1024 [MB] (11 MBps) [2024-12-16T22:27:32.725Z] Copying: 191/1024 [MB] (10 MBps) [2024-12-16T22:27:33.666Z] Copying: 203/1024 [MB] (11 MBps) [2024-12-16T22:27:34.608Z] Copying: 214/1024 [MB] (11 MBps) [2024-12-16T22:27:35.549Z] Copying: 225/1024 [MB] (11 MBps) [2024-12-16T22:27:36.484Z] Copying: 236/1024 [MB] (10 MBps) [2024-12-16T22:27:37.861Z] Copying: 247/1024 [MB] (11 MBps) [2024-12-16T22:27:38.796Z] Copying: 259/1024 [MB] (11 MBps) [2024-12-16T22:27:39.733Z] Copying: 271/1024 [MB] (11 MBps) [2024-12-16T22:27:40.669Z] Copying: 283/1024 [MB] (11 MBps) [2024-12-16T22:27:41.611Z] Copying: 294/1024 [MB] (11 MBps) [2024-12-16T22:27:42.548Z] Copying: 306/1024 [MB] (11 MBps) [2024-12-16T22:27:43.490Z] Copying: 317/1024 [MB] (11 MBps) [2024-12-16T22:27:44.878Z] Copying: 331/1024 [MB] (13 MBps) [2024-12-16T22:27:45.815Z] Copying: 342/1024 [MB] (10 MBps) [2024-12-16T22:27:46.751Z] Copying: 354/1024 [MB] (11 MBps) [2024-12-16T22:27:47.688Z] Copying: 366/1024 [MB] (12 MBps) [2024-12-16T22:27:48.624Z] Copying: 378/1024 [MB] (11 MBps) [2024-12-16T22:27:49.560Z] Copying: 390/1024 [MB] (11 MBps) [2024-12-16T22:27:50.496Z] Copying: 401/1024 [MB] (11 MBps) [2024-12-16T22:27:51.872Z] Copying: 413/1024 [MB] (12 MBps) [2024-12-16T22:27:52.809Z] Copying: 427/1024 [MB] (13 MBps) [2024-12-16T22:27:53.745Z] Copying: 441/1024 [MB] (13 MBps) [2024-12-16T22:27:54.682Z] Copying: 453/1024 [MB] (12 MBps) [2024-12-16T22:27:55.681Z] Copying: 465/1024 [MB] (12 MBps) [2024-12-16T22:27:56.641Z] Copying: 477/1024 [MB] (12 MBps) [2024-12-16T22:27:57.577Z] Copying: 489/1024 [MB] (12 MBps) [2024-12-16T22:27:58.513Z] Copying: 501/1024 [MB] (12 MBps) [2024-12-16T22:27:59.889Z] Copying: 513/1024 [MB] (12 MBps) [2024-12-16T22:28:00.825Z] Copying: 526/1024 [MB] (12 MBps) [2024-12-16T22:28:01.759Z] Copying: 538/1024 [MB] (11 MBps) [2024-12-16T22:28:02.693Z] Copying: 550/1024 [MB] (12 MBps) [2024-12-16T22:28:03.629Z] Copying: 562/1024 [MB] (12 MBps) [2024-12-16T22:28:04.565Z] Copying: 574/1024 [MB] (11 MBps) [2024-12-16T22:28:05.500Z] Copying: 586/1024 [MB] (12 MBps) [2024-12-16T22:28:06.881Z] Copying: 598/1024 [MB] (11 MBps) [2024-12-16T22:28:07.817Z] Copying: 610/1024 [MB] (12 MBps) [2024-12-16T22:28:08.753Z] Copying: 622/1024 [MB] (11 MBps) [2024-12-16T22:28:09.689Z] Copying: 634/1024 [MB] (12 MBps) [2024-12-16T22:28:10.624Z] Copying: 646/1024 [MB] (11 MBps) [2024-12-16T22:28:11.559Z] Copying: 658/1024 [MB] 
(11 MBps) [2024-12-16T22:28:12.495Z] Copying: 670/1024 [MB] (11 MBps) [2024-12-16T22:28:13.876Z] Copying: 682/1024 [MB] (11 MBps) [2024-12-16T22:28:14.815Z] Copying: 694/1024 [MB] (11 MBps) [2024-12-16T22:28:15.756Z] Copying: 704/1024 [MB] (10 MBps) [2024-12-16T22:28:16.695Z] Copying: 716/1024 [MB] (11 MBps) [2024-12-16T22:28:17.630Z] Copying: 727/1024 [MB] (10 MBps) [2024-12-16T22:28:18.566Z] Copying: 739/1024 [MB] (11 MBps) [2024-12-16T22:28:19.499Z] Copying: 751/1024 [MB] (11 MBps) [2024-12-16T22:28:20.522Z] Copying: 762/1024 [MB] (11 MBps) [2024-12-16T22:28:21.897Z] Copying: 774/1024 [MB] (11 MBps) [2024-12-16T22:28:22.464Z] Copying: 786/1024 [MB] (11 MBps) [2024-12-16T22:28:23.839Z] Copying: 798/1024 [MB] (12 MBps) [2024-12-16T22:28:24.774Z] Copying: 810/1024 [MB] (11 MBps) [2024-12-16T22:28:25.709Z] Copying: 822/1024 [MB] (12 MBps) [2024-12-16T22:28:26.644Z] Copying: 835/1024 [MB] (12 MBps) [2024-12-16T22:28:27.580Z] Copying: 847/1024 [MB] (12 MBps) [2024-12-16T22:28:28.522Z] Copying: 860/1024 [MB] (12 MBps) [2024-12-16T22:28:29.901Z] Copying: 870/1024 [MB] (10 MBps) [2024-12-16T22:28:30.468Z] Copying: 882/1024 [MB] (11 MBps) [2024-12-16T22:28:31.850Z] Copying: 893/1024 [MB] (10 MBps) [2024-12-16T22:28:32.786Z] Copying: 904/1024 [MB] (11 MBps) [2024-12-16T22:28:33.725Z] Copying: 915/1024 [MB] (11 MBps) [2024-12-16T22:28:34.666Z] Copying: 927/1024 [MB] (11 MBps) [2024-12-16T22:28:35.606Z] Copying: 938/1024 [MB] (11 MBps) [2024-12-16T22:28:36.545Z] Copying: 949/1024 [MB] (10 MBps) [2024-12-16T22:28:37.485Z] Copying: 960/1024 [MB] (10 MBps) [2024-12-16T22:28:38.859Z] Copying: 971/1024 [MB] (10 MBps) [2024-12-16T22:28:39.793Z] Copying: 983/1024 [MB] (11 MBps) [2024-12-16T22:28:40.731Z] Copying: 994/1024 [MB] (11 MBps) [2024-12-16T22:28:41.673Z] Copying: 1006/1024 [MB] (11 MBps) [2024-12-16T22:28:42.246Z] Copying: 1018/1024 [MB] (11 MBps) [2024-12-16T22:28:42.246Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-16 22:28:42.101800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.899 [2024-12-16 22:28:42.101873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:35.899 [2024-12-16 22:28:42.101894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:35.899 [2024-12-16 22:28:42.101904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.899 [2024-12-16 22:28:42.101931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:35.899 [2024-12-16 22:28:42.102429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.899 [2024-12-16 22:28:42.102456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:35.899 [2024-12-16 22:28:42.102465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:32:35.899 [2024-12-16 22:28:42.102473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.899 [2024-12-16 22:28:42.102695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.899 [2024-12-16 22:28:42.102704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:35.899 [2024-12-16 22:28:42.102712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:32:35.899 [2024-12-16 22:28:42.102720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.899 [2024-12-16 22:28:42.102777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.899 
[2024-12-16 22:28:42.102787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:35.899 [2024-12-16 22:28:42.102795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:35.899 [2024-12-16 22:28:42.102802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.899 [2024-12-16 22:28:42.102863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.899 [2024-12-16 22:28:42.102872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:35.899 [2024-12-16 22:28:42.102880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:35.899 [2024-12-16 22:28:42.102887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.899 [2024-12-16 22:28:42.102900] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:35.899 [2024-12-16 22:28:42.102911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.102997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103057] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 
22:28:42.103255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:35.899 [2024-12-16 22:28:42.103483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:32:35.900 [2024-12-16 22:28:42.103525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:35.900 [2024-12-16 22:28:42.103811] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:35.900 [2024-12-16 22:28:42.103820] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b65a023-bcff-4196-b4ae-6df41c46d80d 00:32:35.900 [2024-12-16 22:28:42.103829] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:35.900 [2024-12-16 22:28:42.103852] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:35.900 [2024-12-16 22:28:42.103862] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:35.900 [2024-12-16 22:28:42.103875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:35.900 [2024-12-16 22:28:42.103884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:35.900 [2024-12-16 22:28:42.103893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:35.900 [2024-12-16 22:28:42.103901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:35.900 [2024-12-16 22:28:42.103909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:35.900 [2024-12-16 22:28:42.103916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:35.900 [2024-12-16 22:28:42.103925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.900 [2024-12-16 22:28:42.103937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:35.900 [2024-12-16 22:28:42.103946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:32:35.900 [2024-12-16 22:28:42.103957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.106171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.900 [2024-12-16 22:28:42.106196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:35.900 [2024-12-16 22:28:42.106215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.195 ms 00:32:35.900 [2024-12-16 22:28:42.106225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.106311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.900 [2024-12-16 22:28:42.106321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:35.900 [2024-12-16 22:28:42.106335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:35.900 
[2024-12-16 22:28:42.106344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.112380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.112544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:35.900 [2024-12-16 22:28:42.112561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.112570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.112632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.112642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:35.900 [2024-12-16 22:28:42.112656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.112664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.112722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.112734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:35.900 [2024-12-16 22:28:42.112742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.112751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.112767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.112781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:35.900 [2024-12-16 22:28:42.112789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.112800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.122458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.122507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:35.900 [2024-12-16 22:28:42.122517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.122524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.130656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.130696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:35.900 [2024-12-16 22:28:42.130713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.130721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.130744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.130760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:35.900 [2024-12-16 22:28:42.130768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.130775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.130817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.130826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:35.900 [2024-12-16 22:28:42.130834] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.130868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.130918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.130941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:35.900 [2024-12-16 22:28:42.130949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.130957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.130979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.130988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:35.900 [2024-12-16 22:28:42.130996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.131003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.131040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.131053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:35.900 [2024-12-16 22:28:42.131064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.131072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.900 [2024-12-16 22:28:42.131109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.900 [2024-12-16 22:28:42.131122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:35.900 [2024-12-16 22:28:42.131130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.900 [2024-12-16 22:28:42.131137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.901 [2024-12-16 22:28:42.131252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 29.430 ms, result 0 00:32:36.160 00:32:36.160 00:32:36.160 22:28:42 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:38.706 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:38.706 22:28:44 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:38.706 [2024-12-16 22:28:44.495631] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:32:38.706 [2024-12-16 22:28:44.495732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98121 ] 00:32:38.706 [2024-12-16 22:28:44.646714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.706 [2024-12-16 22:28:44.668558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.706 [2024-12-16 22:28:44.764523] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:38.706 [2024-12-16 22:28:44.764600] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:38.706 [2024-12-16 22:28:44.926081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.926319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:38.706 [2024-12-16 22:28:44.926345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:38.706 [2024-12-16 22:28:44.926355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.926432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.926444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:38.706 [2024-12-16 22:28:44.926453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:38.706 [2024-12-16 22:28:44.926461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.926492] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:38.706 [2024-12-16 22:28:44.926775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:38.706 [2024-12-16 22:28:44.926793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.926804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:38.706 [2024-12-16 22:28:44.926817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:32:38.706 [2024-12-16 22:28:44.926826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.927162] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:38.706 [2024-12-16 22:28:44.927195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.927205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:38.706 [2024-12-16 22:28:44.927216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:38.706 [2024-12-16 22:28:44.927229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.927294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.927304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:38.706 [2024-12-16 22:28:44.927312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:38.706 [2024-12-16 22:28:44.927320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.927581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:38.706 [2024-12-16 22:28:44.927600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:38.706 [2024-12-16 22:28:44.927615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:32:38.706 [2024-12-16 22:28:44.927626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.927760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.927779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:38.706 [2024-12-16 22:28:44.927787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:32:38.706 [2024-12-16 22:28:44.927795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.927825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.927834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:38.706 [2024-12-16 22:28:44.927861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:38.706 [2024-12-16 22:28:44.927869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.927896] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:38.706 [2024-12-16 22:28:44.930078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.930131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:38.706 [2024-12-16 22:28:44.930143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:32:38.706 [2024-12-16 22:28:44.930151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.930191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.930201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:38.706 [2024-12-16 22:28:44.930210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:38.706 [2024-12-16 22:28:44.930219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.930275] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:38.706 [2024-12-16 22:28:44.930304] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:38.706 [2024-12-16 22:28:44.930345] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:38.706 [2024-12-16 22:28:44.930363] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:38.706 [2024-12-16 22:28:44.930470] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:38.706 [2024-12-16 22:28:44.930481] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:38.706 [2024-12-16 22:28:44.930493] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:38.706 [2024-12-16 22:28:44.930505] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:38.706 [2024-12-16 22:28:44.930519] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:38.706 [2024-12-16 22:28:44.930527] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:38.706 [2024-12-16 22:28:44.930539] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:38.706 [2024-12-16 22:28:44.930546] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:38.706 [2024-12-16 22:28:44.930553] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:38.706 [2024-12-16 22:28:44.930561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.930573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:38.706 [2024-12-16 22:28:44.930581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:32:38.706 [2024-12-16 22:28:44.930588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.930671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.706 [2024-12-16 22:28:44.930683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:38.706 [2024-12-16 22:28:44.930690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:38.706 [2024-12-16 22:28:44.930697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.706 [2024-12-16 22:28:44.930828] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:38.706 [2024-12-16 22:28:44.930864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:38.706 [2024-12-16 22:28:44.930877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:38.706 [2024-12-16 22:28:44.930886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.706 [2024-12-16 22:28:44.930894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:38.706 [2024-12-16 22:28:44.930910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:38.706 [2024-12-16 22:28:44.930917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:38.706 [2024-12-16 22:28:44.930925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:38.706 [2024-12-16 22:28:44.930933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:38.706 [2024-12-16 22:28:44.930940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:38.706 [2024-12-16 22:28:44.930947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:38.706 [2024-12-16 22:28:44.930955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:38.706 [2024-12-16 22:28:44.930963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:38.706 [2024-12-16 22:28:44.930970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:38.706 [2024-12-16 22:28:44.930977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:38.706 [2024-12-16 22:28:44.930984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.706 [2024-12-16 22:28:44.930991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:38.706 [2024-12-16 22:28:44.930998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:38.706 [2024-12-16 22:28:44.931008] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.706 [2024-12-16 22:28:44.931017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:38.706 [2024-12-16 22:28:44.931024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:38.706 [2024-12-16 22:28:44.931031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.706 [2024-12-16 22:28:44.931038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:38.706 [2024-12-16 22:28:44.931045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:38.706 [2024-12-16 22:28:44.931052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.707 [2024-12-16 22:28:44.931059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:38.707 [2024-12-16 22:28:44.931065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:38.707 [2024-12-16 22:28:44.931072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.707 [2024-12-16 22:28:44.931080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:38.707 [2024-12-16 22:28:44.931087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:38.707 [2024-12-16 22:28:44.931093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:38.707 [2024-12-16 22:28:44.931100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:38.707 [2024-12-16 22:28:44.931107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:38.707 [2024-12-16 22:28:44.931113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:38.707 [2024-12-16 22:28:44.931125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:38.707 [2024-12-16 22:28:44.931131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:38.707 [2024-12-16 22:28:44.931138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:38.707 [2024-12-16 22:28:44.931144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:38.707 [2024-12-16 22:28:44.931150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:38.707 [2024-12-16 22:28:44.931157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.707 [2024-12-16 22:28:44.931164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:38.707 [2024-12-16 22:28:44.931170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:38.707 [2024-12-16 22:28:44.931178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.707 [2024-12-16 22:28:44.931187] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:38.707 [2024-12-16 22:28:44.931195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:38.707 [2024-12-16 22:28:44.931203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:38.707 [2024-12-16 22:28:44.931216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:38.707 [2024-12-16 22:28:44.931232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:38.707 [2024-12-16 22:28:44.931239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:38.707 [2024-12-16 22:28:44.931246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:38.707 
[2024-12-16 22:28:44.931256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:38.707 [2024-12-16 22:28:44.931263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:38.707 [2024-12-16 22:28:44.931269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:38.707 [2024-12-16 22:28:44.931278] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:38.707 [2024-12-16 22:28:44.931288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:38.707 [2024-12-16 22:28:44.931297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:38.707 [2024-12-16 22:28:44.931304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:38.707 [2024-12-16 22:28:44.931312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:38.707 [2024-12-16 22:28:44.931320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:38.707 [2024-12-16 22:28:44.931327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:38.707 [2024-12-16 22:28:44.931334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:38.707 [2024-12-16 22:28:44.931342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:38.707 [2024-12-16 22:28:44.931349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:38.707 [2024-12-16 22:28:44.931357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:38.707 [2024-12-16 22:28:44.931364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:38.707 [2024-12-16 22:28:44.931371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:38.707 [2024-12-16 22:28:44.931381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:38.707 [2024-12-16 22:28:44.931389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:38.707 [2024-12-16 22:28:44.931396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:38.707 [2024-12-16 22:28:44.931403] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:38.707 [2024-12-16 22:28:44.931412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:38.707 [2024-12-16 22:28:44.931424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:38.707 [2024-12-16 22:28:44.931431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:38.707 [2024-12-16 22:28:44.931438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:38.707 [2024-12-16 22:28:44.931445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:38.707 [2024-12-16 22:28:44.931455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.931463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:38.707 [2024-12-16 22:28:44.931471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:32:38.707 [2024-12-16 22:28:44.931478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.941651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.941872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:38.707 [2024-12-16 22:28:44.941893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.129 ms 00:32:38.707 [2024-12-16 22:28:44.941909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.941997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.942006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:38.707 [2024-12-16 22:28:44.942015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:38.707 [2024-12-16 22:28:44.942022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.964428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.964493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:38.707 [2024-12-16 22:28:44.964508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.342 ms 00:32:38.707 [2024-12-16 22:28:44.964517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.964568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.964580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:38.707 [2024-12-16 22:28:44.964590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:38.707 [2024-12-16 22:28:44.964600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.964717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.964735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:38.707 [2024-12-16 22:28:44.964751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:38.707 [2024-12-16 22:28:44.964760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.964931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.964944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:38.707 [2024-12-16 22:28:44.964958] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:32:38.707 [2024-12-16 22:28:44.964967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.972692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.972901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:38.707 [2024-12-16 22:28:44.972935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.684 ms 00:32:38.707 [2024-12-16 22:28:44.972943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.973063] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:38.707 [2024-12-16 22:28:44.973076] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:38.707 [2024-12-16 22:28:44.973087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.973100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:38.707 [2024-12-16 22:28:44.973108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:38.707 [2024-12-16 22:28:44.973119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.985417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.985459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:38.707 [2024-12-16 22:28:44.985471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.282 ms 00:32:38.707 [2024-12-16 22:28:44.985485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.985616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.985626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:38.707 [2024-12-16 22:28:44.985634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:38.707 [2024-12-16 22:28:44.985645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.985696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.985709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:38.707 [2024-12-16 22:28:44.985718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:38.707 [2024-12-16 22:28:44.985725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.707 [2024-12-16 22:28:44.986058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.707 [2024-12-16 22:28:44.986077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:38.708 [2024-12-16 22:28:44.986093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:32:38.708 [2024-12-16 22:28:44.986106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:44.986121] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:38.708 [2024-12-16 22:28:44.986132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:44.986143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:38.708 [2024-12-16 22:28:44.986151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:38.708 [2024-12-16 22:28:44.986171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:44.996024] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:38.708 [2024-12-16 22:28:44.996331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:44.996348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:38.708 [2024-12-16 22:28:44.996358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.142 ms 00:32:38.708 [2024-12-16 22:28:44.996371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:44.998907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:44.998942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:38.708 [2024-12-16 22:28:44.998952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.507 ms 00:32:38.708 [2024-12-16 22:28:44.998960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:44.999072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:44.999084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:38.708 [2024-12-16 22:28:44.999094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:38.708 [2024-12-16 22:28:44.999108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:44.999136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:44.999146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:38.708 [2024-12-16 22:28:44.999154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:38.708 [2024-12-16 22:28:44.999165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:44.999201] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:38.708 [2024-12-16 22:28:44.999214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:44.999225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:38.708 [2024-12-16 22:28:44.999233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:38.708 [2024-12-16 22:28:44.999240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:45.006295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:45.006353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:38.708 [2024-12-16 22:28:45.006366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.030 ms 00:32:38.708 [2024-12-16 22:28:45.006374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:45.006470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:38.708 [2024-12-16 22:28:45.006480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:38.708 [2024-12-16 22:28:45.006488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:32:38.708 [2024-12-16 22:28:45.006501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:38.708 [2024-12-16 22:28:45.007740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.214 ms, result 0 00:32:40.088  [2024-12-16T22:28:47.379Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-16T22:28:48.322Z] Copying: 37/1024 [MB] (21 MBps) [2024-12-16T22:28:49.266Z] Copying: 78/1024 [MB] (41 MBps) [2024-12-16T22:28:50.210Z] Copying: 107/1024 [MB] (28 MBps) [2024-12-16T22:28:51.196Z] Copying: 128/1024 [MB] (21 MBps) [2024-12-16T22:28:52.172Z] Copying: 145/1024 [MB] (17 MBps) [2024-12-16T22:28:53.115Z] Copying: 180/1024 [MB] (34 MBps) [2024-12-16T22:28:54.058Z] Copying: 216/1024 [MB] (35 MBps) [2024-12-16T22:28:55.452Z] Copying: 233/1024 [MB] (16 MBps) [2024-12-16T22:28:56.024Z] Copying: 252/1024 [MB] (18 MBps) [2024-12-16T22:28:57.411Z] Copying: 267/1024 [MB] (15 MBps) [2024-12-16T22:28:58.354Z] Copying: 287/1024 [MB] (19 MBps) [2024-12-16T22:28:59.299Z] Copying: 302/1024 [MB] (14 MBps) [2024-12-16T22:29:00.233Z] Copying: 320/1024 [MB] (18 MBps) [2024-12-16T22:29:01.168Z] Copying: 342/1024 [MB] (22 MBps) [2024-12-16T22:29:02.112Z] Copying: 365/1024 [MB] (23 MBps) [2024-12-16T22:29:03.053Z] Copying: 381/1024 [MB] (15 MBps) [2024-12-16T22:29:04.427Z] Copying: 400/1024 [MB] (18 MBps) [2024-12-16T22:29:05.360Z] Copying: 426/1024 [MB] (25 MBps) [2024-12-16T22:29:06.299Z] Copying: 450/1024 [MB] (24 MBps) [2024-12-16T22:29:07.243Z] Copying: 467/1024 [MB] (17 MBps) [2024-12-16T22:29:08.188Z] Copying: 486/1024 [MB] (18 MBps) [2024-12-16T22:29:09.131Z] Copying: 500/1024 [MB] (14 MBps) [2024-12-16T22:29:10.076Z] Copying: 517/1024 [MB] (16 MBps) [2024-12-16T22:29:11.020Z] Copying: 531/1024 [MB] (14 MBps) [2024-12-16T22:29:12.400Z] Copying: 544/1024 [MB] (12 MBps) [2024-12-16T22:29:13.344Z] Copying: 559/1024 [MB] (14 MBps) [2024-12-16T22:29:14.284Z] Copying: 571/1024 [MB] (12 MBps) [2024-12-16T22:29:15.221Z] Copying: 582/1024 [MB] (10 MBps) [2024-12-16T22:29:16.164Z] Copying: 594/1024 [MB] (12 MBps) [2024-12-16T22:29:17.099Z] Copying: 604/1024 [MB] (10 MBps) [2024-12-16T22:29:18.040Z] Copying: 616/1024 [MB] (11 MBps) [2024-12-16T22:29:19.521Z] Copying: 630/1024 [MB] (13 MBps) [2024-12-16T22:29:20.093Z] Copying: 640/1024 [MB] (10 MBps) [2024-12-16T22:29:21.036Z] Copying: 650/1024 [MB] (10 MBps) [2024-12-16T22:29:22.421Z] Copying: 660/1024 [MB] (10 MBps) [2024-12-16T22:29:23.364Z] Copying: 671/1024 [MB] (10 MBps) [2024-12-16T22:29:24.306Z] Copying: 681/1024 [MB] (10 MBps) [2024-12-16T22:29:25.249Z] Copying: 692/1024 [MB] (10 MBps) [2024-12-16T22:29:26.190Z] Copying: 702/1024 [MB] (10 MBps) [2024-12-16T22:29:27.134Z] Copying: 739/1024 [MB] (37 MBps) [2024-12-16T22:29:28.078Z] Copying: 752/1024 [MB] (12 MBps) [2024-12-16T22:29:29.029Z] Copying: 763/1024 [MB] (11 MBps) [2024-12-16T22:29:30.417Z] Copying: 777/1024 [MB] (13 MBps) [2024-12-16T22:29:31.361Z] Copying: 792/1024 [MB] (15 MBps) [2024-12-16T22:29:32.305Z] Copying: 810/1024 [MB] (17 MBps) [2024-12-16T22:29:33.248Z] Copying: 825/1024 [MB] (15 MBps) [2024-12-16T22:29:34.192Z] Copying: 847/1024 [MB] (22 MBps) [2024-12-16T22:29:35.136Z] Copying: 865/1024 [MB] (17 MBps) [2024-12-16T22:29:36.079Z] Copying: 876/1024 [MB] (10 MBps) [2024-12-16T22:29:37.023Z] Copying: 888/1024 [MB] (12 MBps) [2024-12-16T22:29:38.411Z] Copying: 898/1024 [MB] (10 MBps) [2024-12-16T22:29:39.354Z] Copying: 909/1024 [MB] (10 MBps) [2024-12-16T22:29:40.298Z] Copying: 919/1024 
[MB] (10 MBps) [2024-12-16T22:29:41.243Z] Copying: 943/1024 [MB] (23 MBps) [2024-12-16T22:29:42.185Z] Copying: 954/1024 [MB] (11 MBps) [2024-12-16T22:29:43.129Z] Copying: 988/1024 [MB] (34 MBps) [2024-12-16T22:29:44.072Z] Copying: 1023/1024 [MB] (34 MBps) [2024-12-16T22:29:44.072Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-16 22:29:43.752812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.725 [2024-12-16 22:29:43.752891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:37.725 [2024-12-16 22:29:43.752908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:37.725 [2024-12-16 22:29:43.752916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.725 [2024-12-16 22:29:43.756672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:37.725 [2024-12-16 22:29:43.758168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.725 [2024-12-16 22:29:43.758218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:37.725 [2024-12-16 22:29:43.758228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:33:37.725 [2024-12-16 22:29:43.758236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.725 [2024-12-16 22:29:43.768450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.725 [2024-12-16 22:29:43.768499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:37.725 [2024-12-16 22:29:43.768510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.388 ms 00:33:37.725 [2024-12-16 22:29:43.768518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.725 [2024-12-16 22:29:43.768544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.725 [2024-12-16 22:29:43.768553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:37.725 [2024-12-16 22:29:43.768562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:37.725 [2024-12-16 22:29:43.768570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.725 [2024-12-16 22:29:43.768625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.725 [2024-12-16 22:29:43.768643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:37.725 [2024-12-16 22:29:43.768654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:33:37.725 [2024-12-16 22:29:43.768662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.725 [2024-12-16 22:29:43.768675] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:37.725 [2024-12-16 22:29:43.768687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126464 / 261120 wr_cnt: 1 state: open 00:33:37.725 [2024-12-16 22:29:43.768701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: 
free 00:33:37.725 [2024-12-16 22:29:43.768731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.768858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:37.725 [2024-12-16 22:29:43.769382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769522] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:37.726 [2024-12-16 22:29:43.769690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:37.726 [2024-12-16 22:29:43.769698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b65a023-bcff-4196-b4ae-6df41c46d80d 00:33:37.726 [2024-12-16 22:29:43.769706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126464 00:33:37.726 [2024-12-16 22:29:43.769713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126496 00:33:37.726 [2024-12-16 22:29:43.769720] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126464 
00:33:37.726 [2024-12-16 22:29:43.769728] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:33:37.726 [2024-12-16 22:29:43.769738] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:37.726 [2024-12-16 22:29:43.769749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:37.726 [2024-12-16 22:29:43.769757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:37.726 [2024-12-16 22:29:43.769763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:37.726 [2024-12-16 22:29:43.769770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:37.726 [2024-12-16 22:29:43.769779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.726 [2024-12-16 22:29:43.769791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:37.726 [2024-12-16 22:29:43.769798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:33:37.726 [2024-12-16 22:29:43.769805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.771788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.726 [2024-12-16 22:29:43.771819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:37.726 [2024-12-16 22:29:43.771833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms 00:33:37.726 [2024-12-16 22:29:43.772161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.772283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.726 [2024-12-16 22:29:43.772312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:37.726 [2024-12-16 22:29:43.772335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:33:37.726 [2024-12-16 22:29:43.772354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.778908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.779047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:37.726 [2024-12-16 22:29:43.779100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.779130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.779198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.779220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:37.726 [2024-12-16 22:29:43.779239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.779257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.779303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.779331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:37.726 [2024-12-16 22:29:43.779396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.779418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.779446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.779467] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:37.726 [2024-12-16 22:29:43.779486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.779504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.792151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.792334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:37.726 [2024-12-16 22:29:43.792393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.792415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.803332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.803506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:37.726 [2024-12-16 22:29:43.803562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.803584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.803646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.803670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:37.726 [2024-12-16 22:29:43.803690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.803716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.803762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.726 [2024-12-16 22:29:43.803864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:37.726 [2024-12-16 22:29:43.803886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.726 [2024-12-16 22:29:43.803915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.726 [2024-12-16 22:29:43.803989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.727 [2024-12-16 22:29:43.804013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:37.727 [2024-12-16 22:29:43.804036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.727 [2024-12-16 22:29:43.804106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.727 [2024-12-16 22:29:43.804157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.727 [2024-12-16 22:29:43.804181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:37.727 [2024-12-16 22:29:43.804200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.727 [2024-12-16 22:29:43.804220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.727 [2024-12-16 22:29:43.804270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.727 [2024-12-16 22:29:43.804293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:37.727 [2024-12-16 22:29:43.804312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.727 [2024-12-16 22:29:43.804371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.727 [2024-12-16 22:29:43.804437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:33:37.727 [2024-12-16 22:29:43.804468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:37.727 [2024-12-16 22:29:43.804487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.727 [2024-12-16 22:29:43.804511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.727 [2024-12-16 22:29:43.804654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 51.812 ms, result 0 00:33:38.669 00:33:38.669 00:33:38.669 22:29:44 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:38.669 [2024-12-16 22:29:44.764192] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:38.669 [2024-12-16 22:29:44.764544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98747 ] 00:33:38.669 [2024-12-16 22:29:44.927401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.669 [2024-12-16 22:29:44.947547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.930 [2024-12-16 22:29:45.054909] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:38.930 [2024-12-16 22:29:45.055229] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:38.930 [2024-12-16 22:29:45.216246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.216456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:38.930 [2024-12-16 22:29:45.216648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:38.930 [2024-12-16 22:29:45.216672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.216757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.216769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:38.930 [2024-12-16 22:29:45.216778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:33:38.930 [2024-12-16 22:29:45.216791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.216816] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:38.930 [2024-12-16 22:29:45.217116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:38.930 [2024-12-16 22:29:45.217134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.217148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:38.930 [2024-12-16 22:29:45.217161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:33:38.930 [2024-12-16 22:29:45.217169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.217443] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:38.930 [2024-12-16 22:29:45.217473] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.217483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:38.930 [2024-12-16 22:29:45.217497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:38.930 [2024-12-16 22:29:45.217511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.217572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.217583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:38.930 [2024-12-16 22:29:45.217592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:38.930 [2024-12-16 22:29:45.217600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.218010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.218051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:38.930 [2024-12-16 22:29:45.218080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:33:38.930 [2024-12-16 22:29:45.218105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.218252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.218280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:38.930 [2024-12-16 22:29:45.218369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:38.930 [2024-12-16 22:29:45.218396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.218449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.218476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:38.930 [2024-12-16 22:29:45.218495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:38.930 [2024-12-16 22:29:45.218515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.218549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:38.930 [2024-12-16 22:29:45.220754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.220796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:38.930 [2024-12-16 22:29:45.220815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:33:38.930 [2024-12-16 22:29:45.220823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.220886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.220896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:38.930 [2024-12-16 22:29:45.220905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:38.930 [2024-12-16 22:29:45.220913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.220969] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:38.930 [2024-12-16 22:29:45.220998] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:38.930 [2024-12-16 
22:29:45.221034] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:38.930 [2024-12-16 22:29:45.221051] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:38.930 [2024-12-16 22:29:45.221157] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:38.930 [2024-12-16 22:29:45.221169] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:38.930 [2024-12-16 22:29:45.221180] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:38.930 [2024-12-16 22:29:45.221191] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221207] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221215] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:38.930 [2024-12-16 22:29:45.221224] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:38.930 [2024-12-16 22:29:45.221232] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:38.930 [2024-12-16 22:29:45.221240] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:38.930 [2024-12-16 22:29:45.221247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.221255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:38.930 [2024-12-16 22:29:45.221266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:33:38.930 [2024-12-16 22:29:45.221274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.221357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.930 [2024-12-16 22:29:45.221404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:38.930 [2024-12-16 22:29:45.221413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:38.930 [2024-12-16 22:29:45.221421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.930 [2024-12-16 22:29:45.221518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:38.930 [2024-12-16 22:29:45.221534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:38.930 [2024-12-16 22:29:45.221547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:38.930 [2024-12-16 22:29:45.221575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:38.930 [2024-12-16 22:29:45.221596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:38.930 [2024-12-16 
22:29:45.221615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:38.930 [2024-12-16 22:29:45.221622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:38.930 [2024-12-16 22:29:45.221628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:38.930 [2024-12-16 22:29:45.221635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:38.930 [2024-12-16 22:29:45.221642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:38.930 [2024-12-16 22:29:45.221649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:38.930 [2024-12-16 22:29:45.221663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:38.930 [2024-12-16 22:29:45.221688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:38.930 [2024-12-16 22:29:45.221708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:38.930 [2024-12-16 22:29:45.221731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:38.930 [2024-12-16 22:29:45.221751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:38.930 [2024-12-16 22:29:45.221771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:38.930 [2024-12-16 22:29:45.221784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:38.930 [2024-12-16 22:29:45.221790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:38.930 [2024-12-16 22:29:45.221797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:38.930 [2024-12-16 22:29:45.221804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:38.930 [2024-12-16 22:29:45.221811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:38.930 [2024-12-16 22:29:45.221817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:38.930 [2024-12-16 22:29:45.221830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:33:38.930 [2024-12-16 22:29:45.221855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:38.930 [2024-12-16 22:29:45.221869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:38.930 [2024-12-16 22:29:45.221877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.930 [2024-12-16 22:29:45.221895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:38.930 [2024-12-16 22:29:45.221901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:38.930 [2024-12-16 22:29:45.221909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:38.930 [2024-12-16 22:29:45.221917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:38.930 [2024-12-16 22:29:45.221924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:38.930 [2024-12-16 22:29:45.221931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:38.930 [2024-12-16 22:29:45.221940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:38.930 [2024-12-16 22:29:45.221950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:38.931 [2024-12-16 22:29:45.221959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:38.931 [2024-12-16 22:29:45.221967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:38.931 [2024-12-16 22:29:45.221975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:38.931 [2024-12-16 22:29:45.221985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:38.931 [2024-12-16 22:29:45.221993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:38.931 [2024-12-16 22:29:45.222000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:38.931 [2024-12-16 22:29:45.222008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:38.931 [2024-12-16 22:29:45.222015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:38.931 [2024-12-16 22:29:45.222023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:38.931 [2024-12-16 22:29:45.222031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:38.931 [2024-12-16 22:29:45.222038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:38.931 [2024-12-16 22:29:45.222046] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:38.931 [2024-12-16 22:29:45.222055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:38.931 [2024-12-16 22:29:45.222064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:38.931 [2024-12-16 22:29:45.222072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:38.931 [2024-12-16 22:29:45.222081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:38.931 [2024-12-16 22:29:45.222091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:38.931 [2024-12-16 22:29:45.222099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:38.931 [2024-12-16 22:29:45.222108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:38.931 [2024-12-16 22:29:45.222119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:38.931 [2024-12-16 22:29:45.222127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.222135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:38.931 [2024-12-16 22:29:45.222144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:33:38.931 [2024-12-16 22:29:45.222152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.232059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.232212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:38.931 [2024-12-16 22:29:45.232271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.860 ms 00:33:38.931 [2024-12-16 22:29:45.232295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.232397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.232420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:38.931 [2024-12-16 22:29:45.232440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:38.931 [2024-12-16 22:29:45.232468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.253469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.253699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:38.931 [2024-12-16 22:29:45.253803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.931 ms 00:33:38.931 [2024-12-16 22:29:45.253868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.253951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.254049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:38.931 
[2024-12-16 22:29:45.254086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:38.931 [2024-12-16 22:29:45.254126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.254341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.254457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:38.931 [2024-12-16 22:29:45.254529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:38.931 [2024-12-16 22:29:45.254561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.254765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.254956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:38.931 [2024-12-16 22:29:45.255031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:33:38.931 [2024-12-16 22:29:45.255063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.263299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.263450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:38.931 [2024-12-16 22:29:45.263523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.184 ms 00:33:38.931 [2024-12-16 22:29:45.263894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.931 [2024-12-16 22:29:45.264184] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:38.931 [2024-12-16 22:29:45.264293] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:38.931 [2024-12-16 22:29:45.264331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.931 [2024-12-16 22:29:45.264352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:38.931 [2024-12-16 22:29:45.264374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:33:38.931 [2024-12-16 22:29:45.264397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.276763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.276923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:39.191 [2024-12-16 22:29:45.276983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.297 ms 00:33:39.191 [2024-12-16 22:29:45.277006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.277148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.277172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:39.191 [2024-12-16 22:29:45.277240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:39.191 [2024-12-16 22:29:45.277270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.277344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.277373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:39.191 [2024-12-16 22:29:45.277393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:33:39.191 [2024-12-16 22:29:45.277444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.277773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.277821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:39.191 [2024-12-16 22:29:45.277860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:33:39.191 [2024-12-16 22:29:45.277880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.277910] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:39.191 [2024-12-16 22:29:45.277941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.277969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:39.191 [2024-12-16 22:29:45.278055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:39.191 [2024-12-16 22:29:45.278078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.287475] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:39.191 [2024-12-16 22:29:45.287737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.287770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:39.191 [2024-12-16 22:29:45.287833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.625 ms 00:33:39.191 [2024-12-16 22:29:45.287876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.290511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.290639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:39.191 [2024-12-16 22:29:45.290704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:33:39.191 [2024-12-16 22:29:45.290725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.290830] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:39.191 [2024-12-16 22:29:45.291540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.291636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:39.191 [2024-12-16 22:29:45.291697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:33:39.191 [2024-12-16 22:29:45.291718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.291760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 22:29:45.291793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:39.191 [2024-12-16 22:29:45.291820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:39.191 [2024-12-16 22:29:45.291935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.191 [2024-12-16 22:29:45.292013] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:39.191 [2024-12-16 22:29:45.292110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.191 [2024-12-16 
22:29:45.292139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:39.192 [2024-12-16 22:29:45.292159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:33:39.192 [2024-12-16 22:29:45.292182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.192 [2024-12-16 22:29:45.298274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.192 [2024-12-16 22:29:45.298426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:39.192 [2024-12-16 22:29:45.298486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.031 ms 00:33:39.192 [2024-12-16 22:29:45.298517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.192 [2024-12-16 22:29:45.298742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.192 [2024-12-16 22:29:45.298775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:39.192 [2024-12-16 22:29:45.298785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:39.192 [2024-12-16 22:29:45.298794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.192 [2024-12-16 22:29:45.300033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.299 ms, result 0 00:33:40.577  [2024-12-16T22:29:47.496Z] Copying: 15/1024 [MB] (15 MBps) [2024-12-16T22:29:48.923Z] Copying: 33/1024 [MB] (17 MBps) [2024-12-16T22:29:49.496Z] Copying: 51/1024 [MB] (17 MBps) [2024-12-16T22:29:50.883Z] Copying: 70/1024 [MB] (19 MBps) [2024-12-16T22:29:51.828Z] Copying: 94/1024 [MB] (24 MBps) [2024-12-16T22:29:52.774Z] Copying: 113/1024 [MB] (18 MBps) [2024-12-16T22:29:53.718Z] Copying: 132/1024 [MB] (18 MBps) [2024-12-16T22:29:54.662Z] Copying: 150/1024 [MB] (18 MBps) [2024-12-16T22:29:55.604Z] Copying: 172/1024 [MB] (22 MBps) [2024-12-16T22:29:56.549Z] Copying: 194/1024 [MB] (22 MBps) [2024-12-16T22:29:57.494Z] Copying: 211/1024 [MB] (16 MBps) [2024-12-16T22:29:58.881Z] Copying: 231/1024 [MB] (20 MBps) [2024-12-16T22:29:59.826Z] Copying: 252/1024 [MB] (20 MBps) [2024-12-16T22:30:00.770Z] Copying: 270/1024 [MB] (17 MBps) [2024-12-16T22:30:01.714Z] Copying: 295/1024 [MB] (25 MBps) [2024-12-16T22:30:02.657Z] Copying: 316/1024 [MB] (21 MBps) [2024-12-16T22:30:03.600Z] Copying: 345/1024 [MB] (28 MBps) [2024-12-16T22:30:04.543Z] Copying: 356/1024 [MB] (11 MBps) [2024-12-16T22:30:05.927Z] Copying: 367/1024 [MB] (11 MBps) [2024-12-16T22:30:06.499Z] Copying: 379/1024 [MB] (11 MBps) [2024-12-16T22:30:07.887Z] Copying: 395/1024 [MB] (16 MBps) [2024-12-16T22:30:08.830Z] Copying: 416/1024 [MB] (20 MBps) [2024-12-16T22:30:09.773Z] Copying: 435/1024 [MB] (19 MBps) [2024-12-16T22:30:10.716Z] Copying: 461/1024 [MB] (25 MBps) [2024-12-16T22:30:11.659Z] Copying: 483/1024 [MB] (22 MBps) [2024-12-16T22:30:12.604Z] Copying: 495/1024 [MB] (12 MBps) [2024-12-16T22:30:13.549Z] Copying: 512/1024 [MB] (16 MBps) [2024-12-16T22:30:14.494Z] Copying: 542/1024 [MB] (30 MBps) [2024-12-16T22:30:15.880Z] Copying: 568/1024 [MB] (25 MBps) [2024-12-16T22:30:16.824Z] Copying: 583/1024 [MB] (14 MBps) [2024-12-16T22:30:17.836Z] Copying: 598/1024 [MB] (14 MBps) [2024-12-16T22:30:18.779Z] Copying: 608/1024 [MB] (10 MBps) [2024-12-16T22:30:19.722Z] Copying: 619/1024 [MB] (10 MBps) [2024-12-16T22:30:20.667Z] Copying: 630/1024 [MB] (10 MBps) [2024-12-16T22:30:21.611Z] Copying: 640/1024 [MB] (10 MBps) [2024-12-16T22:30:22.555Z] Copying: 
651/1024 [MB] (10 MBps) [2024-12-16T22:30:23.498Z] Copying: 661/1024 [MB] (10 MBps) [2024-12-16T22:30:24.884Z] Copying: 677/1024 [MB] (15 MBps) [2024-12-16T22:30:25.826Z] Copying: 689/1024 [MB] (12 MBps) [2024-12-16T22:30:26.771Z] Copying: 700/1024 [MB] (11 MBps) [2024-12-16T22:30:27.716Z] Copying: 711/1024 [MB] (11 MBps) [2024-12-16T22:30:28.660Z] Copying: 722/1024 [MB] (11 MBps) [2024-12-16T22:30:29.604Z] Copying: 733/1024 [MB] (10 MBps) [2024-12-16T22:30:30.549Z] Copying: 744/1024 [MB] (10 MBps) [2024-12-16T22:30:31.493Z] Copying: 755/1024 [MB] (10 MBps) [2024-12-16T22:30:32.881Z] Copying: 766/1024 [MB] (10 MBps) [2024-12-16T22:30:33.826Z] Copying: 781/1024 [MB] (15 MBps) [2024-12-16T22:30:34.771Z] Copying: 794/1024 [MB] (13 MBps) [2024-12-16T22:30:35.713Z] Copying: 805/1024 [MB] (10 MBps) [2024-12-16T22:30:36.657Z] Copying: 827/1024 [MB] (22 MBps) [2024-12-16T22:30:37.602Z] Copying: 845/1024 [MB] (17 MBps) [2024-12-16T22:30:38.547Z] Copying: 861/1024 [MB] (15 MBps) [2024-12-16T22:30:39.491Z] Copying: 876/1024 [MB] (15 MBps) [2024-12-16T22:30:40.878Z] Copying: 888/1024 [MB] (11 MBps) [2024-12-16T22:30:41.823Z] Copying: 898/1024 [MB] (10 MBps) [2024-12-16T22:30:42.768Z] Copying: 908/1024 [MB] (10 MBps) [2024-12-16T22:30:43.713Z] Copying: 934/1024 [MB] (26 MBps) [2024-12-16T22:30:44.658Z] Copying: 945/1024 [MB] (10 MBps) [2024-12-16T22:30:45.612Z] Copying: 959/1024 [MB] (13 MBps) [2024-12-16T22:30:46.607Z] Copying: 983/1024 [MB] (24 MBps) [2024-12-16T22:30:47.550Z] Copying: 995/1024 [MB] (12 MBps) [2024-12-16T22:30:48.124Z] Copying: 1012/1024 [MB] (16 MBps) [2024-12-16T22:30:48.124Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-16 22:30:47.948941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.777 [2024-12-16 22:30:47.949069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:41.777 [2024-12-16 22:30:47.949145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:41.777 [2024-12-16 22:30:47.949175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.777 [2024-12-16 22:30:47.949208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:41.777 [2024-12-16 22:30:47.949735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.777 [2024-12-16 22:30:47.949821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:41.777 [2024-12-16 22:30:47.949862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:34:41.777 [2024-12-16 22:30:47.949886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.777 [2024-12-16 22:30:47.950075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.777 [2024-12-16 22:30:47.950100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:41.777 [2024-12-16 22:30:47.950117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:34:41.777 [2024-12-16 22:30:47.950133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.777 [2024-12-16 22:30:47.950207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.777 [2024-12-16 22:30:47.950231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:41.777 [2024-12-16 22:30:47.950260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:41.777 [2024-12-16 22:30:47.950278] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.777 [2024-12-16 22:30:47.950333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.777 [2024-12-16 22:30:47.950395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:41.777 [2024-12-16 22:30:47.950415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:34:41.777 [2024-12-16 22:30:47.950431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.777 [2024-12-16 22:30:47.950451] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:41.777 [2024-12-16 22:30:47.950476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131584 / 261120 wr_cnt: 1 state: open 00:34:41.777 [2024-12-16 22:30:47.950537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.950967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951117] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:41.777 [2024-12-16 22:30:47.951747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.951770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.951793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.951816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.951888] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.951913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.951937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.951989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 
22:30:47.952875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.952995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:34:41.778 [2024-12-16 22:30:47.953024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:41.778 [2024-12-16 22:30:47.953061] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:41.778 [2024-12-16 22:30:47.953073] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b65a023-bcff-4196-b4ae-6df41c46d80d 00:34:41.778 [2024-12-16 22:30:47.953079] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131584 00:34:41.778 [2024-12-16 22:30:47.953085] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5152 00:34:41.778 [2024-12-16 22:30:47.953094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5120 00:34:41.778 [2024-12-16 22:30:47.953103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0063 00:34:41.778 [2024-12-16 22:30:47.953108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:41.778 [2024-12-16 22:30:47.953115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:41.778 [2024-12-16 22:30:47.953120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:41.778 [2024-12-16 22:30:47.953126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:41.778 [2024-12-16 22:30:47.953131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:41.778 [2024-12-16 22:30:47.953138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.778 [2024-12-16 22:30:47.953145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:41.778 [2024-12-16 22:30:47.953154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:34:41.778 [2024-12-16 22:30:47.953160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.778 [2024-12-16 22:30:47.954692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.778 [2024-12-16 22:30:47.954782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:41.778 [2024-12-16 22:30:47.954830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:34:41.778 [2024-12-16 22:30:47.954860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.778 [2024-12-16 22:30:47.954963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.778 [2024-12-16 22:30:47.955049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:41.778 [2024-12-16 22:30:47.955099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:34:41.778 [2024-12-16 22:30:47.955116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.778 [2024-12-16 22:30:47.959772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.778 [2024-12-16 22:30:47.959875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:34:41.778 [2024-12-16 22:30:47.959935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.778 [2024-12-16 22:30:47.959998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.778 [2024-12-16 22:30:47.960052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.778 [2024-12-16 22:30:47.960101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:41.778 [2024-12-16 22:30:47.960120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.778 [2024-12-16 22:30:47.960153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.778 [2024-12-16 22:30:47.960209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.778 [2024-12-16 22:30:47.960277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:41.778 [2024-12-16 22:30:47.960296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.778 [2024-12-16 22:30:47.960311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.778 [2024-12-16 22:30:47.960332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.778 [2024-12-16 22:30:47.960374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:41.778 [2024-12-16 22:30:47.960391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.778 [2024-12-16 22:30:47.960406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.968693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 22:30:47.968814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:41.779 [2024-12-16 22:30:47.968905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.968944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.975911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 22:30:47.976031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:41.779 [2024-12-16 22:30:47.976072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.976089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.976110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 22:30:47.976117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:41.779 [2024-12-16 22:30:47.976128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.976134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.976168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 22:30:47.976174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:41.779 [2024-12-16 22:30:47.976181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.976192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.976236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 
22:30:47.976243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:41.779 [2024-12-16 22:30:47.976250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.976258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.976278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 22:30:47.976285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:41.779 [2024-12-16 22:30:47.976291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.976297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.976326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 22:30:47.976332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:41.779 [2024-12-16 22:30:47.976338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.976349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.976381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:41.779 [2024-12-16 22:30:47.976388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:41.779 [2024-12-16 22:30:47.976394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:41.779 [2024-12-16 22:30:47.976400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.779 [2024-12-16 22:30:47.976491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.532 ms, result 0 00:34:42.040 00:34:42.040 00:34:42.040 22:30:48 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:44.586 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96448 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96448 ']' 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96448 00:34:44.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96448) - No such process 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 96448 is not found' 00:34:44.586 Process with pid 96448 is not found 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:44.586 Remove shared memory files 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- 
ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_band_md /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_l2p_l1 /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_l2p_l2 /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_l2p_l2_ctx /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_nvc_md /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_p2l_pool /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_sb /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_sb_shm /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_trim_bitmap /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_trim_log /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_trim_md /dev/hugepages/ftl_1b65a023-bcff-4196-b4ae-6df41c46d80d_vmap 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:44.586 00:34:44.586 real 4m50.691s 00:34:44.586 user 4m38.597s 00:34:44.586 sys 0m11.786s 00:34:44.586 ************************************ 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.586 22:30:50 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:44.586 END TEST ftl_restore_fast 00:34:44.586 ************************************ 00:34:44.586 22:30:50 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:44.586 22:30:50 ftl -- ftl/ftl.sh@14 -- # killprocess 88047 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@954 -- # '[' -z 88047 ']' 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@958 -- # kill -0 88047 00:34:44.586 Process with pid 88047 is not found 00:34:44.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88047) - No such process 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 88047 is not found' 00:34:44.586 22:30:50 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:44.586 22:30:50 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=99426 00:34:44.586 22:30:50 ftl -- ftl/ftl.sh@20 -- # waitforlisten 99426 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@835 -- # '[' -z 99426 ']' 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.586 22:30:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:44.586 22:30:50 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:44.586 [2024-12-16 22:30:50.664130] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:34:44.586 [2024-12-16 22:30:50.664279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99426 ]
00:34:44.586 [2024-12-16 22:30:50.818182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:44.586 [2024-12-16 22:30:50.846921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:34:45.528 22:30:51 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:45.528 22:30:51 ftl -- common/autotest_common.sh@868 -- # return 0
00:34:45.528 22:30:51 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:45.528 nvme0n1
00:34:45.528 22:30:51 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:45.528 22:30:51 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:45.528 22:30:51 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:45.789 22:30:51 ftl -- ftl/common.sh@28 -- # stores=ef682119-1c64-4c9a-b476-0baa510cca34
00:34:45.789 22:30:51 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:45.789 22:30:51 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef682119-1c64-4c9a-b476-0baa510cca34
00:34:46.050 22:30:52 ftl -- ftl/ftl.sh@23 -- # killprocess 99426
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@954 -- # '[' -z 99426 ']'
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@958 -- # kill -0 99426
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@959 -- # uname
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99426
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:46.050 killing process with pid 99426
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99426'
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@973 -- # kill 99426
00:34:46.050 22:30:52 ftl -- common/autotest_common.sh@978 -- # wait 99426
00:34:46.310 22:30:52 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:46.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:46.571 Waiting for block devices as requested
00:34:46.571 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:46.571 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:46.833 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:46.833 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:52.122 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:52.122 Remove shared memory files
00:34:52.122 22:30:58 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:52.122 22:30:58 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:52.122 22:30:58 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:52.122 22:30:58 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:52.122 22:30:58 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:52.122 22:30:58 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:52.122 22:30:58 ftl -- ftl/common.sh@209 -- # rm -f rm -f
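Before the final reset, the exit handler attaches the NVMe controller over PCIe and deletes every leftover lvol store through the RPC socket, as traced above. The snippet below is a rough bash equivalent of that logged sequence, assuming an SPDK checkout at the same path; error handling is omitted and the PCIe address is the one from this run.

    # Rough equivalent of the logged attach-and-clear-lvstores sequence.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR"/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    for uuid in $("$SPDK_DIR"/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$SPDK_DIR"/scripts/rpc.py bdev_lvol_delete_lvstore -u "$uuid"   # one delete per store UUID
    done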
00:34:52.122 ************************************
00:34:52.122 END TEST ftl
00:34:52.122 ************************************
00:34:52.122
00:34:52.122 real 17m4.299s
00:34:52.122 user 18m55.559s
00:34:52.122 sys 1m32.102s
00:34:52.122 22:30:58 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:52.122 22:30:58 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:52.122 22:30:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:52.122 22:30:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:52.122 22:30:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:52.122 22:30:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:52.122 22:30:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:52.122 22:30:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:52.122 22:30:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:52.122 22:30:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:52.122 22:30:58 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:52.122 22:30:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:52.122 22:30:58 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:52.122 22:30:58 -- common/autotest_common.sh@10 -- # set +x
00:34:52.122 22:30:58 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:52.122 22:30:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:52.122 22:30:58 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:52.122 22:30:58 -- common/autotest_common.sh@10 -- # set +x
00:34:53.509 INFO: APP EXITING
00:34:53.509 INFO: killing all VMs
00:34:53.509 INFO: killing vhost app
00:34:53.509 INFO: EXIT DONE
00:34:53.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:54.342 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:54.342 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:54.342 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:54.342 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:54.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:55.176 Cleaning
00:34:55.176 Removing: /var/run/dpdk/spdk0/config
00:34:55.176 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:55.176 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:55.176 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:55.176 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:55.176 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:55.176 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:55.176 Removing: /var/run/dpdk/spdk0
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71008
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71166
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71362
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71450
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71473
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71584
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71602
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71779
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71853
00:34:55.176 Removing: /var/run/dpdk/spdk_pid71937
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72032
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72113
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72147
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72183
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72254
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72365
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72785
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72827
00:34:55.176 Removing: /var/run/dpdk/spdk_pid72873
00:34:55.177 Removing: /var/run/dpdk/spdk_pid72884
00:34:55.177 Removing: /var/run/dpdk/spdk_pid72953
00:34:55.177 Removing: /var/run/dpdk/spdk_pid72958
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73016
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73032
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73074
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73092
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73134
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73152
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73279
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73316
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73399
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73560
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73632
00:34:55.177 Removing: /var/run/dpdk/spdk_pid73653
00:34:55.177 Removing: /var/run/dpdk/spdk_pid74066
00:34:55.177 Removing: /var/run/dpdk/spdk_pid74153
00:34:55.177 Removing: /var/run/dpdk/spdk_pid74257
00:34:55.177 Removing: /var/run/dpdk/spdk_pid74288
00:34:55.177 Removing: /var/run/dpdk/spdk_pid74319
00:34:55.177 Removing: /var/run/dpdk/spdk_pid74392
00:34:55.177 Removing: /var/run/dpdk/spdk_pid74995
00:34:55.177 Removing: /var/run/dpdk/spdk_pid75026
00:34:55.177 Removing: /var/run/dpdk/spdk_pid75487
00:34:55.177 Removing: /var/run/dpdk/spdk_pid75575
00:34:55.177 Removing: /var/run/dpdk/spdk_pid75673
00:34:55.177 Removing: /var/run/dpdk/spdk_pid75710
00:34:55.177 Removing: /var/run/dpdk/spdk_pid75735
00:34:55.177 Removing: /var/run/dpdk/spdk_pid75755
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77591
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77706
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77715
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77733
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77772
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77776
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77788
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77833
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77837
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77849
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77894
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77898
00:34:55.177 Removing: /var/run/dpdk/spdk_pid77910
00:34:55.177 Removing: /var/run/dpdk/spdk_pid79303
00:34:55.177 Removing: /var/run/dpdk/spdk_pid79389
00:34:55.177 Removing: /var/run/dpdk/spdk_pid80782
00:34:55.177 Removing: /var/run/dpdk/spdk_pid82515
00:34:55.177 Removing: /var/run/dpdk/spdk_pid82573
00:34:55.177 Removing: /var/run/dpdk/spdk_pid82637
00:34:55.177 Removing: /var/run/dpdk/spdk_pid82741
00:34:55.177 Removing: /var/run/dpdk/spdk_pid82817
00:34:55.177 Removing: /var/run/dpdk/spdk_pid82907
00:34:55.177 Removing: /var/run/dpdk/spdk_pid82965
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83029
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83128
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83214
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83304
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83356
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83427
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83521
00:34:55.177 Removing: /var/run/dpdk/spdk_pid83607
00:34:55.439 Removing: /var/run/dpdk/spdk_pid83692
00:34:55.439 Removing: /var/run/dpdk/spdk_pid83749
00:34:55.439 Removing: /var/run/dpdk/spdk_pid83820
00:34:55.439 Removing: /var/run/dpdk/spdk_pid83913
00:34:55.439 Removing: /var/run/dpdk/spdk_pid83999
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84084
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84136
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84209
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84273
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84343
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84436
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84516
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84605
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84657
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84726
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84789
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84852
00:34:55.439 Removing: /var/run/dpdk/spdk_pid84951
00:34:55.439 Removing: /var/run/dpdk/spdk_pid85037
00:34:55.439 Removing: /var/run/dpdk/spdk_pid85175
00:34:55.439 Removing: /var/run/dpdk/spdk_pid85443
00:34:55.439 Removing: /var/run/dpdk/spdk_pid85468
00:34:55.439 Removing: /var/run/dpdk/spdk_pid85910
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86087
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86177
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86283
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86320
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86345
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86656
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86694
00:34:55.439 Removing: /var/run/dpdk/spdk_pid86745
00:34:55.439 Removing: /var/run/dpdk/spdk_pid87114
00:34:55.439 Removing: /var/run/dpdk/spdk_pid87256
00:34:55.439 Removing: /var/run/dpdk/spdk_pid88047
00:34:55.439 Removing: /var/run/dpdk/spdk_pid88168
00:34:55.439 Removing: /var/run/dpdk/spdk_pid88317
00:34:55.439 Removing: /var/run/dpdk/spdk_pid88410
00:34:55.439 Removing: /var/run/dpdk/spdk_pid88703
00:34:55.439 Removing: /var/run/dpdk/spdk_pid88962
00:34:55.439 Removing: /var/run/dpdk/spdk_pid89315
00:34:55.439 Removing: /var/run/dpdk/spdk_pid89470
00:34:55.439 Removing: /var/run/dpdk/spdk_pid89629
00:34:55.439 Removing: /var/run/dpdk/spdk_pid89665
00:34:55.439 Removing: /var/run/dpdk/spdk_pid89852
00:34:55.439 Removing: /var/run/dpdk/spdk_pid89866
00:34:55.439 Removing: /var/run/dpdk/spdk_pid89902
00:34:55.439 Removing: /var/run/dpdk/spdk_pid90161
00:34:55.439 Removing: /var/run/dpdk/spdk_pid90375
00:34:55.439 Removing: /var/run/dpdk/spdk_pid90918
00:34:55.439 Removing: /var/run/dpdk/spdk_pid91576
00:34:55.439 Removing: /var/run/dpdk/spdk_pid92070
00:34:55.439 Removing: /var/run/dpdk/spdk_pid92819
00:34:55.439 Removing: /var/run/dpdk/spdk_pid92950
00:34:55.439 Removing: /var/run/dpdk/spdk_pid93048
00:34:55.439 Removing: /var/run/dpdk/spdk_pid93655
00:34:55.439 Removing: /var/run/dpdk/spdk_pid93708
00:34:55.439 Removing: /var/run/dpdk/spdk_pid94360
00:34:55.439 Removing: /var/run/dpdk/spdk_pid94786
00:34:55.439 Removing: /var/run/dpdk/spdk_pid95542
00:34:55.439 Removing: /var/run/dpdk/spdk_pid95660
00:34:55.439 Removing: /var/run/dpdk/spdk_pid95691
00:34:55.439 Removing: /var/run/dpdk/spdk_pid95744
00:34:55.439 Removing: /var/run/dpdk/spdk_pid95795
00:34:55.439 Removing: /var/run/dpdk/spdk_pid95844
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96037
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96106
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96163
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96223
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96253
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96315
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96448
00:34:55.439 Removing: /var/run/dpdk/spdk_pid96657
00:34:55.439 Removing: /var/run/dpdk/spdk_pid97231
00:34:55.439 Removing: /var/run/dpdk/spdk_pid98121
00:34:55.439 Removing: /var/run/dpdk/spdk_pid98747
00:34:55.439 Removing: /var/run/dpdk/spdk_pid99426
00:34:55.439 Clean
00:34:55.701 22:31:01 -- common/autotest_common.sh@1453 -- # return 0
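The "Cleaning" pass above sweeps the DPDK runtime state each spdk_tgt instance left under /var/run/dpdk: the shared spdk0 directory with its fbarray and hugepage bookkeeping, plus one spdk_pidNNNN directory per --file-prefix. A minimal sketch of that sweep, assuming root privileges and the default runtime path:

    # Sketch of the cleanup sweep over DPDK runtime directories.
    for d in /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*; do
        [ -e "$d" ] || continue   # the glob may match nothing
        echo "Removing: $d"
        rm -rf "$d"               # config, fbarray_*, hugepage_info live here
    done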
00:34:55.701 22:31:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:55.701 22:31:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:55.701 22:31:01 -- common/autotest_common.sh@10 -- # set +x
00:34:55.701 22:31:01 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:55.701 22:31:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:55.701 22:31:01 -- common/autotest_common.sh@10 -- # set +x
00:34:55.701 22:31:01 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:55.701 22:31:01 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:55.701 22:31:01 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:55.701 22:31:01 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:55.701 22:31:01 -- spdk/autotest.sh@398 -- # hostname
00:34:55.701 22:31:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:55.963 geninfo: WARNING: invalid characters removed from testname!
00:35:22.546 22:31:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:24.462 22:31:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:27.008 22:31:33 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:30.312 22:31:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:32.861 22:31:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:35.491 22:31:41 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:37.429 22:31:43 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:37.429 22:31:43 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:37.429 22:31:43 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:37.429 22:31:43 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:37.430 22:31:43 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:37.430 22:31:43 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5756 ]]
+ sudo kill 5756
00:35:37.443 [Pipeline] }
00:35:37.460 [Pipeline] // timeout
00:35:37.465 [Pipeline] }
00:35:37.479 [Pipeline] // stage
00:35:37.484 [Pipeline] }
00:35:37.498 [Pipeline] // catchError
00:35:37.508 [Pipeline] stage
00:35:37.510 [Pipeline] { (Stop VM)
00:35:37.523 [Pipeline] sh
00:35:37.808 + vagrant halt
00:35:40.354 ==> default: Halting domain...
00:35:45.662 [Pipeline] sh
00:35:45.947 + vagrant destroy -f
00:35:48.497 ==> default: Removing domain...
00:35:49.453 [Pipeline] sh
00:35:49.738 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:49.747 [Pipeline] }
00:35:49.762 [Pipeline] // stage
00:35:49.767 [Pipeline] }
00:35:49.780 [Pipeline] // dir
00:35:49.785 [Pipeline] }
00:35:49.800 [Pipeline] // wrap
00:35:49.805 [Pipeline] }
00:35:49.817 [Pipeline] // catchError
00:35:49.826 [Pipeline] stage
00:35:49.828 [Pipeline] { (Epilogue)
00:35:49.841 [Pipeline] sh
00:35:50.126 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:55.415 [Pipeline] catchError
00:35:55.416 [Pipeline] {
00:35:55.428 [Pipeline] sh
00:35:55.712 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:55.712 Artifacts sizes are good
00:35:55.722 [Pipeline] }
00:35:55.736 [Pipeline] // catchError
00:35:55.747 [Pipeline] archiveArtifacts
00:35:55.754 Archiving artifacts
00:35:55.875 [Pipeline] cleanWs
00:35:55.895 [WS-CLEANUP] Deleting project workspace...
00:35:55.895 [WS-CLEANUP] Deferred wipeout is used...
00:35:55.913 [WS-CLEANUP] done
00:35:55.915 [Pipeline] }
00:35:55.931 [Pipeline] // stage
00:35:55.936 [Pipeline] }
00:35:55.950 [Pipeline] // node
00:35:55.955 [Pipeline] End of Pipeline
00:35:55.996 Finished: SUCCESS