00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1008 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3670 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.141 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.142 The recommended git tool is: git 00:00:00.142 using credential 00000000-0000-0000-0000-000000000002 00:00:00.144 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.174 Fetching changes from the remote Git repository 00:00:00.177 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.203 Using shallow fetch with depth 1 00:00:00.203 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.203 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.615 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.628 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.640 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.640 > git config core.sparsecheckout # timeout=10 00:00:05.651 > git read-tree -mu HEAD # timeout=10 00:00:05.667 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.689 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.690 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.797 [Pipeline] Start of Pipeline 00:00:05.812 [Pipeline] library 00:00:05.813 Loading library shm_lib@master 00:00:05.813 Library shm_lib@master is cached. Copying from home. 00:00:05.829 [Pipeline] node 00:00:05.842 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.843 [Pipeline] { 00:00:05.853 [Pipeline] catchError 00:00:05.854 [Pipeline] { 00:00:05.868 [Pipeline] wrap 00:00:05.877 [Pipeline] { 00:00:05.887 [Pipeline] stage 00:00:05.889 [Pipeline] { (Prologue) 00:00:05.904 [Pipeline] echo 00:00:05.906 Node: VM-host-SM38 00:00:05.911 [Pipeline] cleanWs 00:00:05.920 [WS-CLEANUP] Deleting project workspace... 00:00:05.920 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.927 [WS-CLEANUP] done 00:00:06.107 [Pipeline] setCustomBuildProperty 00:00:06.176 [Pipeline] httpRequest 00:00:07.163 [Pipeline] echo 00:00:07.164 Sorcerer 10.211.164.20 is alive 00:00:07.172 [Pipeline] retry 00:00:07.173 [Pipeline] { 00:00:07.183 [Pipeline] httpRequest 00:00:07.187 HttpMethod: GET 00:00:07.187 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.188 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.200 Response Code: HTTP/1.1 200 OK 00:00:07.200 Success: Status code 200 is in the accepted range: 200,404 00:00:07.201 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.194 [Pipeline] } 00:00:10.216 [Pipeline] // retry 00:00:10.227 [Pipeline] sh 00:00:10.519 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.537 [Pipeline] httpRequest 00:00:11.111 [Pipeline] echo 00:00:11.113 Sorcerer 10.211.164.20 is alive 00:00:11.123 [Pipeline] retry 00:00:11.125 [Pipeline] { 00:00:11.142 [Pipeline] httpRequest 00:00:11.147 HttpMethod: GET 00:00:11.148 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:11.148 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:00:11.164 Response Code: HTTP/1.1 200 OK 00:00:11.165 Success: Status code 200 is in the accepted range: 200,404 00:00:11.165 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:01:11.535 [Pipeline] } 00:01:11.554 [Pipeline] // retry 00:01:11.562 [Pipeline] sh 00:01:11.849 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz 00:01:14.398 [Pipeline] sh 00:01:14.683 + git -C spdk log --oneline -n5 00:01:14.683 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:01:14.684 5592070b3 doc: update nvmf_tracing.md 00:01:14.684 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:14.684 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:14.684 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT 00:01:14.707 [Pipeline] withCredentials 00:01:14.718 > git --version # timeout=10 00:01:14.728 > git --version # 'git version 2.39.2' 00:01:14.748 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:14.750 [Pipeline] { 00:01:14.763 [Pipeline] retry 00:01:14.766 [Pipeline] { 00:01:14.785 [Pipeline] sh 00:01:15.070 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:15.082 [Pipeline] } 00:01:15.104 [Pipeline] // retry 00:01:15.112 [Pipeline] } 00:01:15.129 [Pipeline] // withCredentials 00:01:15.139 [Pipeline] httpRequest 00:01:15.515 [Pipeline] echo 00:01:15.516 Sorcerer 10.211.164.20 is alive 00:01:15.526 [Pipeline] retry 00:01:15.528 [Pipeline] { 00:01:15.542 [Pipeline] httpRequest 00:01:15.547 HttpMethod: GET 00:01:15.548 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:15.548 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:15.556 Response Code: HTTP/1.1 200 OK 00:01:15.557 Success: Status code 200 is in the accepted range: 200,404 00:01:15.557 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:53.998 [Pipeline] } 00:01:54.020 [Pipeline] // retry 
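The retry blocks above all follow one pattern: GET a pinned source tarball (jbp, spdk, then dpdk) from the internal package mirror ("Sorcerer", 10.211.164.20), save it into the workspace, and unpack it with ownership stripped. A minimal standalone sketch of that pattern, assuming curl as a stand-in for the pipeline's httpRequest/retry steps and that the lab-internal mirror is reachable; URLs and archive names are taken from the log, everything else is an assumption:

#!/usr/bin/env bash
# Editor's sketch of the fetch-and-extract pattern above; curl replaces
# the Jenkins httpRequest + retry block, which is not a shell command.
set -euo pipefail
mirror=http://10.211.164.20/packages
for pkg in jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz \
           spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz; do
  curl --fail --retry 3 --output "$pkg" "$mirror/$pkg"
  # --no-same-owner mirrors the log: extract as the invoking user rather
  # than the UID recorded in the archive.
  tar --no-same-owner -xf "$pkg"
done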
00:01:54.029 [Pipeline] sh 00:01:54.321 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.724 [Pipeline] sh 00:01:56.014 + git -C dpdk log --oneline -n5 00:01:56.014 eeb0605f11 version: 23.11.0 00:01:56.014 238778122a doc: update release notes for 23.11 00:01:56.014 46aa6b3cfc doc: fix description of RSS features 00:01:56.014 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:56.015 7e421ae345 devtools: support skipping forbid rule check 00:01:56.034 [Pipeline] writeFile 00:01:56.050 [Pipeline] sh 00:01:56.338 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:56.353 [Pipeline] sh 00:01:56.639 + cat autorun-spdk.conf 00:01:56.639 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.639 SPDK_TEST_NVME=1 00:01:56.639 SPDK_TEST_FTL=1 00:01:56.639 SPDK_TEST_ISAL=1 00:01:56.639 SPDK_RUN_ASAN=1 00:01:56.639 SPDK_RUN_UBSAN=1 00:01:56.639 SPDK_TEST_XNVME=1 00:01:56.639 SPDK_TEST_NVME_FDP=1 00:01:56.639 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.639 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:56.639 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.648 RUN_NIGHTLY=1 00:01:56.650 [Pipeline] } 00:01:56.664 [Pipeline] // stage 00:01:56.678 [Pipeline] stage 00:01:56.680 [Pipeline] { (Run VM) 00:01:56.692 [Pipeline] sh 00:01:56.977 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:56.977 + echo 'Start stage prepare_nvme.sh' 00:01:56.977 Start stage prepare_nvme.sh 00:01:56.977 + [[ -n 1 ]] 00:01:56.977 + disk_prefix=ex1 00:01:56.977 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:56.977 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:56.977 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:56.977 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.977 ++ SPDK_TEST_NVME=1 00:01:56.977 ++ SPDK_TEST_FTL=1 00:01:56.977 ++ SPDK_TEST_ISAL=1 00:01:56.977 ++ SPDK_RUN_ASAN=1 00:01:56.977 ++ SPDK_RUN_UBSAN=1 00:01:56.977 ++ SPDK_TEST_XNVME=1 00:01:56.977 ++ SPDK_TEST_NVME_FDP=1 00:01:56.977 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.977 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:56.977 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.977 ++ RUN_NIGHTLY=1 00:01:56.977 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:56.977 + nvme_files=() 00:01:56.977 + declare -A nvme_files 00:01:56.977 + backend_dir=/var/lib/libvirt/images/backends 00:01:56.977 + nvme_files['nvme.img']=5G 00:01:56.977 + nvme_files['nvme-cmb.img']=5G 00:01:56.977 + nvme_files['nvme-multi0.img']=4G 00:01:56.977 + nvme_files['nvme-multi1.img']=4G 00:01:56.977 + nvme_files['nvme-multi2.img']=4G 00:01:56.977 + nvme_files['nvme-openstack.img']=8G 00:01:56.977 + nvme_files['nvme-zns.img']=5G 00:01:56.977 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:56.977 + (( SPDK_TEST_FTL == 1 )) 00:01:56.977 + nvme_files["nvme-ftl.img"]=6G 00:01:56.977 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:56.977 + nvme_files["nvme-fdp.img"]=1G 00:01:56.977 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:56.977 + for nvme in "${!nvme_files[@]}" 00:01:56.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:56.977 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:56.977 + for nvme in "${!nvme_files[@]}" 00:01:56.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:57.549 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:57.549 + for nvme in "${!nvme_files[@]}" 00:01:57.549 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:57.816 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.816 + for nvme in "${!nvme_files[@]}" 00:01:57.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:57.816 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:57.816 + for nvme in "${!nvme_files[@]}" 00:01:57.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:57.816 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.816 + for nvme in "${!nvme_files[@]}" 00:01:57.816 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:58.090 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.090 + for nvme in "${!nvme_files[@]}" 00:01:58.090 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:58.090 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.090 + for nvme in "${!nvme_files[@]}" 00:01:58.090 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:58.352 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:58.352 + for nvme in "${!nvme_files[@]}" 00:01:58.352 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:58.925 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.925 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:58.925 + echo 'End stage prepare_nvme.sh' 00:01:58.925 End stage prepare_nvme.sh 00:01:58.939 [Pipeline] sh 00:01:59.226 + DISTRO=fedora39 00:01:59.226 + CPUS=10 00:01:59.226 + RAM=12288 00:01:59.226 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:59.226 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:59.226 00:01:59.226 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:59.226 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:59.226 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:59.226 HELP=0 00:01:59.226 DRY_RUN=0 00:01:59.226 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:59.226 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:59.226 NVME_AUTO_CREATE=0 00:01:59.226 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:59.226 NVME_CMB=,,,, 00:01:59.226 NVME_PMR=,,,, 00:01:59.226 NVME_ZNS=,,,, 00:01:59.226 NVME_MS=true,,,, 00:01:59.226 NVME_FDP=,,,on, 00:01:59.226 SPDK_VAGRANT_DISTRO=fedora39 00:01:59.226 SPDK_VAGRANT_VMCPU=10 00:01:59.226 SPDK_VAGRANT_VMRAM=12288 00:01:59.226 SPDK_VAGRANT_PROVIDER=libvirt 00:01:59.226 SPDK_VAGRANT_HTTP_PROXY= 00:01:59.226 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:59.226 SPDK_OPENSTACK_NETWORK=0 00:01:59.226 VAGRANT_PACKAGE_BOX=0 00:01:59.226 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:59.226 FORCE_DISTRO=true 00:01:59.226 VAGRANT_BOX_VERSION= 00:01:59.226 EXTRA_VAGRANTFILES= 00:01:59.226 NIC_MODEL=e1000 00:01:59.226 00:01:59.226 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:59.488 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:02.041 Bringing machine 'default' up with 'libvirt' provider... 00:02:02.041 ==> default: Creating image (snapshot of base box volume). 00:02:02.041 ==> default: Creating domain with the following settings... 
00:02:02.041 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732667078_e4ce3319ff1f04564e12 00:02:02.041 ==> default: -- Domain type: kvm 00:02:02.041 ==> default: -- Cpus: 10 00:02:02.041 ==> default: -- Feature: acpi 00:02:02.041 ==> default: -- Feature: apic 00:02:02.041 ==> default: -- Feature: pae 00:02:02.041 ==> default: -- Memory: 12288M 00:02:02.041 ==> default: -- Memory Backing: hugepages: 00:02:02.041 ==> default: -- Management MAC: 00:02:02.041 ==> default: -- Loader: 00:02:02.041 ==> default: -- Nvram: 00:02:02.041 ==> default: -- Base box: spdk/fedora39 00:02:02.041 ==> default: -- Storage pool: default 00:02:02.041 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732667078_e4ce3319ff1f04564e12.img (20G) 00:02:02.041 ==> default: -- Volume Cache: default 00:02:02.041 ==> default: -- Kernel: 00:02:02.041 ==> default: -- Initrd: 00:02:02.041 ==> default: -- Graphics Type: vnc 00:02:02.041 ==> default: -- Graphics Port: -1 00:02:02.041 ==> default: -- Graphics IP: 127.0.0.1 00:02:02.041 ==> default: -- Graphics Password: Not defined 00:02:02.041 ==> default: -- Video Type: cirrus 00:02:02.041 ==> default: -- Video VRAM: 9216 00:02:02.041 ==> default: -- Sound Type: 00:02:02.041 ==> default: -- Keymap: en-us 00:02:02.041 ==> default: -- TPM Path: 00:02:02.041 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:02.041 ==> default: -- Command line args: 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:02.041 ==> default: -> value=-drive, 00:02:02.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:02.041 ==> default: -> value=-drive, 00:02:02.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:02.041 ==> default: -> value=-drive, 00:02:02.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.041 ==> default: -> value=-drive, 00:02:02.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.041 ==> default: -> value=-drive, 00:02:02.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:02.041 ==> default: -> value=-drive, 00:02:02.041 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:02.041 ==> default: -> value=-device, 00:02:02.041 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.302 ==> default: Creating shared folders metadata... 00:02:02.302 ==> default: Starting domain. 00:02:04.224 ==> default: Waiting for domain to get an IP address... 00:02:19.133 ==> default: Waiting for SSH to become available... 00:02:20.520 ==> default: Configuring and enabling network interfaces... 00:02:24.732 default: SSH address: 192.168.121.171:22 00:02:24.732 default: SSH username: vagrant 00:02:24.732 default: SSH auth method: private key 00:02:27.275 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:35.422 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:40.716 ==> default: Mounting SSHFS shared folder... 00:02:42.627 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:42.627 ==> default: Checking Mount.. 00:02:44.013 ==> default: Folder Successfully Mounted! 00:02:44.013 00:02:44.013 SUCCESS! 00:02:44.013 00:02:44.013 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:44.013 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:44.013 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:44.013 00:02:44.025 [Pipeline] } 00:02:44.041 [Pipeline] // stage 00:02:44.051 [Pipeline] dir 00:02:44.051 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:44.053 [Pipeline] { 00:02:44.066 [Pipeline] catchError 00:02:44.068 [Pipeline] { 00:02:44.082 [Pipeline] sh 00:02:44.367 + vagrant ssh-config --host vagrant 00:02:44.367 + sed -ne '/^Host/,$p' 00:02:44.367 + tee ssh_conf 00:02:46.910 Host vagrant 00:02:46.910 HostName 192.168.121.171 00:02:46.910 User vagrant 00:02:46.910 Port 22 00:02:46.910 UserKnownHostsFile /dev/null 00:02:46.910 StrictHostKeyChecking no 00:02:46.910 PasswordAuthentication no 00:02:46.910 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:46.910 IdentitiesOnly yes 00:02:46.910 LogLevel FATAL 00:02:46.910 ForwardAgent yes 00:02:46.910 ForwardX11 yes 00:02:46.910 00:02:46.925 [Pipeline] withEnv 00:02:46.927 [Pipeline] { 00:02:46.941 [Pipeline] sh 00:02:47.225 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:47.225 source /etc/os-release 00:02:47.225 [[ -e /image.version ]] && img=$(< /image.version) 00:02:47.225 # Minimal, systemd-like check. 
00:02:47.225 if [[ -e /.dockerenv ]]; then 00:02:47.225 # Clear garbage from the node'\''s name: 00:02:47.225 # agt-er_autotest_547-896 -> autotest_547-896 00:02:47.225 # $HOSTNAME is the actual container id 00:02:47.225 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:47.225 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:47.225 # We can assume this is a mount from a host where container is running, 00:02:47.225 # so fetch its hostname to easily identify the target swarm worker. 00:02:47.225 container="$(< /etc/hostname) ($agent)" 00:02:47.225 else 00:02:47.225 # Fallback 00:02:47.225 container=$agent 00:02:47.225 fi 00:02:47.225 fi 00:02:47.225 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:47.225 ' 00:02:47.496 [Pipeline] } 00:02:47.513 [Pipeline] // withEnv 00:02:47.521 [Pipeline] setCustomBuildProperty 00:02:47.537 [Pipeline] stage 00:02:47.540 [Pipeline] { (Tests) 00:02:47.557 [Pipeline] sh 00:02:47.841 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:48.116 [Pipeline] sh 00:02:48.401 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:48.679 [Pipeline] timeout 00:02:48.680 Timeout set to expire in 50 min 00:02:48.682 [Pipeline] { 00:02:48.697 [Pipeline] sh 00:02:49.068 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:49.637 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:02:49.649 [Pipeline] sh 00:02:49.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:50.208 [Pipeline] sh 00:02:50.492 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:50.774 [Pipeline] sh 00:02:51.057 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:51.317 ++ readlink -f spdk_repo 00:02:51.317 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:51.317 + [[ -n /home/vagrant/spdk_repo ]] 00:02:51.317 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:51.317 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:51.317 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:51.317 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:51.317 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:51.317 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:51.317 + cd /home/vagrant/spdk_repo 00:02:51.318 + source /etc/os-release 00:02:51.318 ++ NAME='Fedora Linux' 00:02:51.318 ++ VERSION='39 (Cloud Edition)' 00:02:51.318 ++ ID=fedora 00:02:51.318 ++ VERSION_ID=39 00:02:51.318 ++ VERSION_CODENAME= 00:02:51.318 ++ PLATFORM_ID=platform:f39 00:02:51.318 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:51.318 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:51.318 ++ LOGO=fedora-logo-icon 00:02:51.318 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:51.318 ++ HOME_URL=https://fedoraproject.org/ 00:02:51.318 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:51.318 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:51.318 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:51.318 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:51.318 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:51.318 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:51.318 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:51.318 ++ SUPPORT_END=2024-11-12 00:02:51.318 ++ VARIANT='Cloud Edition' 00:02:51.318 ++ VARIANT_ID=cloud 00:02:51.318 + uname -a 00:02:51.318 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:51.318 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:51.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:51.838 Hugepages 00:02:51.838 node hugesize free / total 00:02:51.838 node0 1048576kB 0 / 0 00:02:51.838 node0 2048kB 0 / 0 00:02:51.838 00:02:51.839 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:51.839 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:52.099 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:52.099 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:52.099 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:52.099 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:52.099 + rm -f /tmp/spdk-ld-path 00:02:52.099 + source autorun-spdk.conf 00:02:52.099 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:52.099 ++ SPDK_TEST_NVME=1 00:02:52.099 ++ SPDK_TEST_FTL=1 00:02:52.099 ++ SPDK_TEST_ISAL=1 00:02:52.099 ++ SPDK_RUN_ASAN=1 00:02:52.099 ++ SPDK_RUN_UBSAN=1 00:02:52.099 ++ SPDK_TEST_XNVME=1 00:02:52.099 ++ SPDK_TEST_NVME_FDP=1 00:02:52.099 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:52.099 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:52.099 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:52.099 ++ RUN_NIGHTLY=1 00:02:52.099 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:52.099 + [[ -n '' ]] 00:02:52.099 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:52.099 + for M in /var/spdk/build-*-manifest.txt 00:02:52.099 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:52.099 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.099 + for M in /var/spdk/build-*-manifest.txt 00:02:52.099 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:52.099 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.099 + for M in /var/spdk/build-*-manifest.txt 00:02:52.099 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:52.099 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.099 ++ uname 00:02:52.099 + [[ Linux == 
\L\i\n\u\x ]] 00:02:52.099 + sudo dmesg -T 00:02:52.099 + sudo dmesg --clear 00:02:52.099 + dmesg_pid=5761 00:02:52.099 + [[ Fedora Linux == FreeBSD ]] 00:02:52.099 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:52.099 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:52.099 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:52.099 + [[ -x /usr/src/fio-static/fio ]] 00:02:52.099 + sudo dmesg -Tw 00:02:52.099 + export FIO_BIN=/usr/src/fio-static/fio 00:02:52.099 + FIO_BIN=/usr/src/fio-static/fio 00:02:52.099 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:52.099 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:52.099 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:52.099 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:52.099 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:52.099 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:52.099 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:52.099 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:52.099 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:52.361 00:25:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:52.361 00:25:28 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:52.361 00:25:28 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:52.361 00:25:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:52.361 00:25:28 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:52.361 00:25:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:52.361 00:25:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:52.361 00:25:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:52.361 00:25:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:52.361 00:25:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.361 00:25:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.361 00:25:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.361 00:25:28 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.361 00:25:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.361 00:25:28 -- paths/export.sh@5 -- $ export PATH 00:02:52.361 00:25:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.361 00:25:28 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:52.361 00:25:28 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:52.361 00:25:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732667128.XXXXXX 00:02:52.361 00:25:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732667128.58ut7Y 00:02:52.361 00:25:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:52.361 00:25:28 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:02:52.361 00:25:28 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:52.361 00:25:28 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:52.361 00:25:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:52.361 00:25:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:52.361 00:25:28 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:52.361 00:25:28 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:52.361 00:25:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.361 00:25:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:52.361 00:25:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:52.361 00:25:28 -- pm/common@17 -- $ local monitor 00:02:52.361 00:25:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.361 00:25:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.361 00:25:29 -- pm/common@25 -- $ 
sleep 1 00:02:52.361 00:25:29 -- pm/common@21 -- $ date +%s 00:02:52.361 00:25:29 -- pm/common@21 -- $ date +%s 00:02:52.361 00:25:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732667129 00:02:52.361 00:25:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732667129 00:02:52.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732667129_collect-cpu-load.pm.log 00:02:52.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732667129_collect-vmstat.pm.log 00:02:53.315 00:25:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:53.316 00:25:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:53.316 00:25:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:53.316 00:25:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:53.316 00:25:30 -- spdk/autobuild.sh@16 -- $ date -u 00:02:53.316 Wed Nov 27 12:25:30 AM UTC 2024 00:02:53.316 00:25:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:53.316 v25.01-pre-271-g2f2acf4eb 00:02:53.316 00:25:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:53.316 00:25:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:53.316 00:25:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:53.316 00:25:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:53.316 00:25:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.316 ************************************ 00:02:53.316 START TEST asan 00:02:53.316 ************************************ 00:02:53.316 using asan 00:02:53.316 00:25:30 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:53.316 00:02:53.316 real 0m0.000s 00:02:53.316 user 0m0.000s 00:02:53.316 sys 0m0.000s 00:02:53.316 00:25:30 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.316 00:25:30 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:53.316 ************************************ 00:02:53.316 END TEST asan 00:02:53.316 ************************************ 00:02:53.577 00:25:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:53.577 00:25:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:53.577 00:25:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:53.577 00:25:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:53.577 00:25:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.577 ************************************ 00:02:53.577 START TEST ubsan 00:02:53.577 ************************************ 00:02:53.577 using ubsan 00:02:53.577 00:25:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:53.577 00:02:53.577 real 0m0.000s 00:02:53.577 user 0m0.000s 00:02:53.577 sys 0m0.000s 00:02:53.577 ************************************ 00:02:53.577 END TEST ubsan 00:02:53.577 ************************************ 00:02:53.577 00:25:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.577 00:25:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:53.577 00:25:30 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:53.577 00:25:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:53.577 00:25:30 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:53.577 00:25:30 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:02:53.577 00:25:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:53.577 00:25:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.577 ************************************ 00:02:53.577 START TEST build_native_dpdk 00:02:53.577 ************************************ 00:02:53.577 00:25:30 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:53.577 eeb0605f11 version: 23.11.0 00:02:53.577 238778122a doc: update release notes for 23.11 00:02:53.577 46aa6b3cfc doc: fix description of RSS features 00:02:53.577 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:53.577 7e421ae345 devtools: support skipping forbid rule check 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:53.577 00:25:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 
00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:53.578 patching file config/rte_config.h 00:02:53.578 Hunk #1 succeeded at 60 (offset 1 line). 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:53.578 patching file lib/pcapng/rte_pcapng.c 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:53.578 00:25:30 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:53.578 00:25:30 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:58.868 The Meson build system 00:02:58.868 Version: 1.5.0 00:02:58.868 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:58.868 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:58.868 Build type: native build 00:02:58.869 Program cat found: YES (/usr/bin/cat) 00:02:58.869 Project name: DPDK 00:02:58.869 Project version: 23.11.0 00:02:58.869 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:58.869 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:58.869 Host machine cpu family: x86_64 00:02:58.869 Host machine cpu: x86_64 00:02:58.869 Message: ## Building in Developer Mode ## 00:02:58.869 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:58.869 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:58.869 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:58.869 Program python3 found: YES (/usr/bin/python3) 00:02:58.869 Program cat found: YES (/usr/bin/cat) 00:02:58.869 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
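Before the meson step above, the build gated its DPDK patches and the kmods setting on the cmp_versions helper traced at length earlier (lt 23.11.0 21.11.0, lt 23.11.0 24.07.0, ge 23.11.0 24.07.0): split each version on the characters .-: and compare numerically field by field. A condensed reconstruction of that logic under a hypothetical name; the real implementation lives in spdk/scripts/common.sh and additionally validates each field through its decimal helper:

# Editor's sketch of the cmp_versions logic visible in the xtrace above;
# simplified and renamed, not the actual scripts/common.sh source.
cmp_versions_sketch() {        # usage: cmp_versions_sketch 23.11.0 '<' 24.07.0
  local IFS=.-: op=$2 v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"      # IFS=.-: splits 23.11.0 into 23 11 0
  read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
  done
  [[ $op == *=* ]]             # all fields equal: only ==, <=, >= hold
}

Under this logic cmp_versions_sketch 23.11.0 '<' 24.07.0 succeeds, matching the lt trace that preceded the lib/pcapng/rte_pcapng.c patch, while ge 23.11.0 24.07.0 fails, matching the return 1 that left dpdk_kmods=false above.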
00:02:58.869 Compiler for C supports arguments -march=native: YES 00:02:58.869 Checking for size of "void *" : 8 00:02:58.869 Checking for size of "void *" : 8 (cached) 00:02:58.869 Library m found: YES 00:02:58.869 Library numa found: YES 00:02:58.869 Has header "numaif.h" : YES 00:02:58.869 Library fdt found: NO 00:02:58.869 Library execinfo found: NO 00:02:58.869 Has header "execinfo.h" : YES 00:02:58.869 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:58.869 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:58.869 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:58.869 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:58.869 Run-time dependency openssl found: YES 3.1.1 00:02:58.869 Run-time dependency libpcap found: YES 1.10.4 00:02:58.869 Has header "pcap.h" with dependency libpcap: YES 00:02:58.869 Compiler for C supports arguments -Wcast-qual: YES 00:02:58.869 Compiler for C supports arguments -Wdeprecated: YES 00:02:58.869 Compiler for C supports arguments -Wformat: YES 00:02:58.869 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:58.869 Compiler for C supports arguments -Wformat-security: NO 00:02:58.869 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:58.869 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:58.869 Compiler for C supports arguments -Wnested-externs: YES 00:02:58.869 Compiler for C supports arguments -Wold-style-definition: YES 00:02:58.869 Compiler for C supports arguments -Wpointer-arith: YES 00:02:58.869 Compiler for C supports arguments -Wsign-compare: YES 00:02:58.869 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:58.869 Compiler for C supports arguments -Wundef: YES 00:02:58.869 Compiler for C supports arguments -Wwrite-strings: YES 00:02:58.869 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:58.869 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:58.869 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:58.869 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:58.869 Program objdump found: YES (/usr/bin/objdump) 00:02:58.869 Compiler for C supports arguments -mavx512f: YES 00:02:58.869 Checking if "AVX512 checking" compiles: YES 00:02:58.869 Fetching value of define "__SSE4_2__" : 1 00:02:58.869 Fetching value of define "__AES__" : 1 00:02:58.869 Fetching value of define "__AVX__" : 1 00:02:58.869 Fetching value of define "__AVX2__" : 1 00:02:58.869 Fetching value of define "__AVX512BW__" : 1 00:02:58.869 Fetching value of define "__AVX512CD__" : 1 00:02:58.869 Fetching value of define "__AVX512DQ__" : 1 00:02:58.869 Fetching value of define "__AVX512F__" : 1 00:02:58.869 Fetching value of define "__AVX512VL__" : 1 00:02:58.869 Fetching value of define "__PCLMUL__" : 1 00:02:58.869 Fetching value of define "__RDRND__" : 1 00:02:58.869 Fetching value of define "__RDSEED__" : 1 00:02:58.869 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:58.869 Fetching value of define "__znver1__" : (undefined) 00:02:58.869 Fetching value of define "__znver2__" : (undefined) 00:02:58.869 Fetching value of define "__znver3__" : (undefined) 00:02:58.869 Fetching value of define "__znver4__" : (undefined) 00:02:58.869 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:58.869 Message: lib/log: Defining dependency "log" 00:02:58.869 Message: lib/kvargs: Defining dependency "kvargs" 00:02:58.869 Message: lib/telemetry: Defining dependency "telemetry" 
00:02:58.869 Checking for function "getentropy" : NO 00:02:58.869 Message: lib/eal: Defining dependency "eal" 00:02:58.869 Message: lib/ring: Defining dependency "ring" 00:02:58.869 Message: lib/rcu: Defining dependency "rcu" 00:02:58.869 Message: lib/mempool: Defining dependency "mempool" 00:02:58.869 Message: lib/mbuf: Defining dependency "mbuf" 00:02:58.869 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:58.869 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:58.869 Compiler for C supports arguments -mpclmul: YES 00:02:58.869 Compiler for C supports arguments -maes: YES 00:02:58.869 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:58.869 Compiler for C supports arguments -mavx512bw: YES 00:02:58.869 Compiler for C supports arguments -mavx512dq: YES 00:02:58.869 Compiler for C supports arguments -mavx512vl: YES 00:02:58.869 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:58.869 Compiler for C supports arguments -mavx2: YES 00:02:58.869 Compiler for C supports arguments -mavx: YES 00:02:58.869 Message: lib/net: Defining dependency "net" 00:02:58.869 Message: lib/meter: Defining dependency "meter" 00:02:58.869 Message: lib/ethdev: Defining dependency "ethdev" 00:02:58.869 Message: lib/pci: Defining dependency "pci" 00:02:58.869 Message: lib/cmdline: Defining dependency "cmdline" 00:02:58.869 Message: lib/metrics: Defining dependency "metrics" 00:02:58.869 Message: lib/hash: Defining dependency "hash" 00:02:58.869 Message: lib/timer: Defining dependency "timer" 00:02:58.869 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:58.869 Message: lib/acl: Defining dependency "acl" 00:02:58.869 Message: lib/bbdev: Defining dependency "bbdev" 00:02:58.869 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:58.869 Run-time dependency libelf found: YES 0.191 00:02:58.869 Message: lib/bpf: Defining dependency "bpf" 00:02:58.869 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:58.869 Message: lib/compressdev: Defining dependency "compressdev" 00:02:58.869 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:58.869 Message: lib/distributor: Defining dependency "distributor" 00:02:58.869 Message: lib/dmadev: Defining dependency "dmadev" 00:02:58.869 Message: lib/efd: Defining dependency "efd" 00:02:58.869 Message: lib/eventdev: Defining dependency "eventdev" 00:02:58.869 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:58.869 Message: lib/gpudev: Defining dependency "gpudev" 00:02:58.869 Message: lib/gro: Defining dependency "gro" 00:02:58.869 Message: lib/gso: Defining dependency "gso" 00:02:58.869 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:58.869 Message: lib/jobstats: Defining dependency "jobstats" 00:02:58.869 Message: lib/latencystats: Defining dependency "latencystats" 00:02:58.869 Message: lib/lpm: Defining dependency "lpm" 00:02:58.869 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512IFMA__" : 1 00:02:58.869 Message: 
lib/member: Defining dependency "member" 00:02:58.869 Message: lib/pcapng: Defining dependency "pcapng" 00:02:58.869 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:58.869 Message: lib/power: Defining dependency "power" 00:02:58.869 Message: lib/rawdev: Defining dependency "rawdev" 00:02:58.869 Message: lib/regexdev: Defining dependency "regexdev" 00:02:58.869 Message: lib/mldev: Defining dependency "mldev" 00:02:58.869 Message: lib/rib: Defining dependency "rib" 00:02:58.869 Message: lib/reorder: Defining dependency "reorder" 00:02:58.869 Message: lib/sched: Defining dependency "sched" 00:02:58.869 Message: lib/security: Defining dependency "security" 00:02:58.869 Message: lib/stack: Defining dependency "stack" 00:02:58.869 Has header "linux/userfaultfd.h" : YES 00:02:58.869 Has header "linux/vduse.h" : YES 00:02:58.869 Message: lib/vhost: Defining dependency "vhost" 00:02:58.869 Message: lib/ipsec: Defining dependency "ipsec" 00:02:58.869 Message: lib/pdcp: Defining dependency "pdcp" 00:02:58.869 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:58.869 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:58.869 Message: lib/fib: Defining dependency "fib" 00:02:58.869 Message: lib/port: Defining dependency "port" 00:02:58.869 Message: lib/pdump: Defining dependency "pdump" 00:02:58.869 Message: lib/table: Defining dependency "table" 00:02:58.869 Message: lib/pipeline: Defining dependency "pipeline" 00:02:58.869 Message: lib/graph: Defining dependency "graph" 00:02:58.869 Message: lib/node: Defining dependency "node" 00:02:58.869 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:58.869 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:58.869 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:58.869 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.812 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:59.812 Compiler for C supports arguments -Wno-unused-value: YES 00:02:59.812 Compiler for C supports arguments -Wno-format: YES 00:02:59.812 Compiler for C supports arguments -Wno-format-security: YES 00:02:59.812 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:59.812 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:59.812 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:59.812 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:59.812 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.812 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.812 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.812 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:59.812 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:59.812 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:59.812 Has header "sys/epoll.h" : YES 00:02:59.812 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:59.812 Configuring doxy-api-html.conf using configuration 00:02:59.812 Configuring doxy-api-man.conf using configuration 00:02:59.812 Program mandb found: YES (/usr/bin/mandb) 00:02:59.812 Program sphinx-build found: NO 00:02:59.812 Configuring rte_build_config.h using configuration 00:02:59.812 Message: 00:02:59.812 ================= 00:02:59.812 Applications Enabled 00:02:59.812 ================= 00:02:59.812 00:02:59.812 apps: 00:02:59.812 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:02:59.812 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:59.812 test-pmd, test-regex, test-sad, test-security-perf, 00:02:59.812 00:02:59.812 Message: 00:02:59.812 ================= 00:02:59.812 Libraries Enabled 00:02:59.812 ================= 00:02:59.812 00:02:59.812 libs: 00:02:59.812 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:59.812 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:59.812 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:59.812 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:59.812 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:59.812 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:59.812 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:59.812 00:02:59.812 00:02:59.812 Message: 00:02:59.812 =============== 00:02:59.812 Drivers Enabled 00:02:59.812 =============== 00:02:59.812 00:02:59.812 common: 00:02:59.812 00:02:59.812 bus: 00:02:59.812 pci, vdev, 00:02:59.812 mempool: 00:02:59.812 ring, 00:02:59.812 dma: 00:02:59.812 00:02:59.812 net: 00:02:59.812 i40e, 00:02:59.812 raw: 00:02:59.812 00:02:59.812 crypto: 00:02:59.812 00:02:59.812 compress: 00:02:59.812 00:02:59.812 regex: 00:02:59.812 00:02:59.812 ml: 00:02:59.812 00:02:59.812 vdpa: 00:02:59.812 00:02:59.812 event: 00:02:59.812 00:02:59.812 baseband: 00:02:59.812 00:02:59.812 gpu: 00:02:59.812 00:02:59.812 00:02:59.812 Message: 00:02:59.812 ================= 00:02:59.812 Content Skipped 00:02:59.812 ================= 00:02:59.812 00:02:59.812 apps: 00:02:59.812 00:02:59.812 libs: 00:02:59.812 00:02:59.812 drivers: 00:02:59.812 common/cpt: not in enabled drivers build config 00:02:59.812 common/dpaax: not in enabled drivers build config 00:02:59.812 common/iavf: not in enabled drivers build config 00:02:59.812 common/idpf: not in enabled drivers build config 00:02:59.812 common/mvep: not in enabled drivers build config 00:02:59.812 common/octeontx: not in enabled drivers build config 00:02:59.812 bus/auxiliary: not in enabled drivers build config 00:02:59.812 bus/cdx: not in enabled drivers build config 00:02:59.812 bus/dpaa: not in enabled drivers build config 00:02:59.812 bus/fslmc: not in enabled drivers build config 00:02:59.812 bus/ifpga: not in enabled drivers build config 00:02:59.812 bus/platform: not in enabled drivers build config 00:02:59.812 bus/vmbus: not in enabled drivers build config 00:02:59.812 common/cnxk: not in enabled drivers build config 00:02:59.812 common/mlx5: not in enabled drivers build config 00:02:59.812 common/nfp: not in enabled drivers build config 00:02:59.812 common/qat: not in enabled drivers build config 00:02:59.812 common/sfc_efx: not in enabled drivers build config 00:02:59.812 mempool/bucket: not in enabled drivers build config 00:02:59.812 mempool/cnxk: not in enabled drivers build config 00:02:59.812 mempool/dpaa: not in enabled drivers build config 00:02:59.812 mempool/dpaa2: not in enabled drivers build config 00:02:59.812 mempool/octeontx: not in enabled drivers build config 00:02:59.812 mempool/stack: not in enabled drivers build config 00:02:59.812 dma/cnxk: not in enabled drivers build config 00:02:59.812 dma/dpaa: not in enabled drivers build config 00:02:59.812 dma/dpaa2: not in enabled drivers build config 00:02:59.812 dma/hisilicon: not in enabled drivers build config 00:02:59.812 dma/idxd: not in enabled drivers build 
config 00:02:59.812 dma/ioat: not in enabled drivers build config 00:02:59.812 dma/skeleton: not in enabled drivers build config 00:02:59.812 net/af_packet: not in enabled drivers build config 00:02:59.812 net/af_xdp: not in enabled drivers build config 00:02:59.812 net/ark: not in enabled drivers build config 00:02:59.812 net/atlantic: not in enabled drivers build config 00:02:59.812 net/avp: not in enabled drivers build config 00:02:59.812 net/axgbe: not in enabled drivers build config 00:02:59.812 net/bnx2x: not in enabled drivers build config 00:02:59.812 net/bnxt: not in enabled drivers build config 00:02:59.812 net/bonding: not in enabled drivers build config 00:02:59.813 net/cnxk: not in enabled drivers build config 00:02:59.813 net/cpfl: not in enabled drivers build config 00:02:59.813 net/cxgbe: not in enabled drivers build config 00:02:59.813 net/dpaa: not in enabled drivers build config 00:02:59.813 net/dpaa2: not in enabled drivers build config 00:02:59.813 net/e1000: not in enabled drivers build config 00:02:59.813 net/ena: not in enabled drivers build config 00:02:59.813 net/enetc: not in enabled drivers build config 00:02:59.813 net/enetfec: not in enabled drivers build config 00:02:59.813 net/enic: not in enabled drivers build config 00:02:59.813 net/failsafe: not in enabled drivers build config 00:02:59.813 net/fm10k: not in enabled drivers build config 00:02:59.813 net/gve: not in enabled drivers build config 00:02:59.813 net/hinic: not in enabled drivers build config 00:02:59.813 net/hns3: not in enabled drivers build config 00:02:59.813 net/iavf: not in enabled drivers build config 00:02:59.813 net/ice: not in enabled drivers build config 00:02:59.813 net/idpf: not in enabled drivers build config 00:02:59.813 net/igc: not in enabled drivers build config 00:02:59.813 net/ionic: not in enabled drivers build config 00:02:59.813 net/ipn3ke: not in enabled drivers build config 00:02:59.813 net/ixgbe: not in enabled drivers build config 00:02:59.813 net/mana: not in enabled drivers build config 00:02:59.813 net/memif: not in enabled drivers build config 00:02:59.813 net/mlx4: not in enabled drivers build config 00:02:59.813 net/mlx5: not in enabled drivers build config 00:02:59.813 net/mvneta: not in enabled drivers build config 00:02:59.813 net/mvpp2: not in enabled drivers build config 00:02:59.813 net/netvsc: not in enabled drivers build config 00:02:59.813 net/nfb: not in enabled drivers build config 00:02:59.813 net/nfp: not in enabled drivers build config 00:02:59.813 net/ngbe: not in enabled drivers build config 00:02:59.813 net/null: not in enabled drivers build config 00:02:59.813 net/octeontx: not in enabled drivers build config 00:02:59.813 net/octeon_ep: not in enabled drivers build config 00:02:59.813 net/pcap: not in enabled drivers build config 00:02:59.813 net/pfe: not in enabled drivers build config 00:02:59.813 net/qede: not in enabled drivers build config 00:02:59.813 net/ring: not in enabled drivers build config 00:02:59.813 net/sfc: not in enabled drivers build config 00:02:59.813 net/softnic: not in enabled drivers build config 00:02:59.813 net/tap: not in enabled drivers build config 00:02:59.813 net/thunderx: not in enabled drivers build config 00:02:59.813 net/txgbe: not in enabled drivers build config 00:02:59.813 net/vdev_netvsc: not in enabled drivers build config 00:02:59.813 net/vhost: not in enabled drivers build config 00:02:59.813 net/virtio: not in enabled drivers build config 00:02:59.813 net/vmxnet3: not in enabled drivers build config 
00:02:59.813 raw/cnxk_bphy: not in enabled drivers build config 00:02:59.813 raw/cnxk_gpio: not in enabled drivers build config 00:02:59.813 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:59.813 raw/ifpga: not in enabled drivers build config 00:02:59.813 raw/ntb: not in enabled drivers build config 00:02:59.813 raw/skeleton: not in enabled drivers build config 00:02:59.813 crypto/armv8: not in enabled drivers build config 00:02:59.813 crypto/bcmfs: not in enabled drivers build config 00:02:59.813 crypto/caam_jr: not in enabled drivers build config 00:02:59.813 crypto/ccp: not in enabled drivers build config 00:02:59.813 crypto/cnxk: not in enabled drivers build config 00:02:59.813 crypto/dpaa_sec: not in enabled drivers build config 00:02:59.813 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.813 crypto/ipsec_mb: not in enabled drivers build config 00:02:59.813 crypto/mlx5: not in enabled drivers build config 00:02:59.813 crypto/mvsam: not in enabled drivers build config 00:02:59.813 crypto/nitrox: not in enabled drivers build config 00:02:59.813 crypto/null: not in enabled drivers build config 00:02:59.813 crypto/octeontx: not in enabled drivers build config 00:02:59.813 crypto/openssl: not in enabled drivers build config 00:02:59.813 crypto/scheduler: not in enabled drivers build config 00:02:59.813 crypto/uadk: not in enabled drivers build config 00:02:59.813 crypto/virtio: not in enabled drivers build config 00:02:59.813 compress/isal: not in enabled drivers build config 00:02:59.813 compress/mlx5: not in enabled drivers build config 00:02:59.813 compress/octeontx: not in enabled drivers build config 00:02:59.813 compress/zlib: not in enabled drivers build config 00:02:59.813 regex/mlx5: not in enabled drivers build config 00:02:59.813 regex/cn9k: not in enabled drivers build config 00:02:59.813 ml/cnxk: not in enabled drivers build config 00:02:59.813 vdpa/ifc: not in enabled drivers build config 00:02:59.813 vdpa/mlx5: not in enabled drivers build config 00:02:59.813 vdpa/nfp: not in enabled drivers build config 00:02:59.813 vdpa/sfc: not in enabled drivers build config 00:02:59.813 event/cnxk: not in enabled drivers build config 00:02:59.813 event/dlb2: not in enabled drivers build config 00:02:59.813 event/dpaa: not in enabled drivers build config 00:02:59.813 event/dpaa2: not in enabled drivers build config 00:02:59.813 event/dsw: not in enabled drivers build config 00:02:59.813 event/opdl: not in enabled drivers build config 00:02:59.813 event/skeleton: not in enabled drivers build config 00:02:59.813 event/sw: not in enabled drivers build config 00:02:59.813 event/octeontx: not in enabled drivers build config 00:02:59.813 baseband/acc: not in enabled drivers build config 00:02:59.813 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:59.813 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:59.813 baseband/la12xx: not in enabled drivers build config 00:02:59.813 baseband/null: not in enabled drivers build config 00:02:59.813 baseband/turbo_sw: not in enabled drivers build config 00:02:59.813 gpu/cuda: not in enabled drivers build config 00:02:59.813 00:02:59.813 00:02:59.813 Build targets in project: 215 00:02:59.813 00:02:59.813 DPDK 23.11.0 00:02:59.813 00:02:59.813 User defined options 00:02:59.813 libdir : lib 00:02:59.813 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:59.813 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:59.813 c_link_args : 00:02:59.813 enable_docs : false 00:02:59.813 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:59.813 enable_kmods : false 00:02:59.813 machine : native 00:02:59.813 tests : false 00:02:59.813 00:02:59.813 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.813 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:00.075 00:25:36 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:00.075 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.075 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:00.075 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.075 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:00.075 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:00.075 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:00.075 [6/705] Linking static target lib/librte_kvargs.a 00:03:00.075 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:00.336 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:00.336 [9/705] Linking static target lib/librte_log.a 00:03:00.336 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:00.336 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.336 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.336 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:00.597 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.597 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.597 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.597 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.597 [18/705] Linking target lib/librte_log.so.24.0 00:03:00.597 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.597 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.858 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:00.858 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.858 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.858 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:00.858 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:00.858 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.119 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:01.119 [28/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:01.119 [29/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:01.119 [30/705] Linking static target lib/librte_telemetry.a 00:03:01.119 [31/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:01.119 [32/705] Linking target lib/librte_kvargs.so.24.0 00:03:01.119 [33/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:01.119 [34/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:01.119 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:01.119 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:01.381 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:01.381 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:01.381 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:01.381 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:01.381 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:01.381 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.381 [43/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:01.381 [44/705] Linking target lib/librte_telemetry.so.24.0 00:03:01.642 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:01.642 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:01.642 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:01.642 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:01.642 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:01.903 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:01.903 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:01.903 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:01.903 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:01.903 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:01.903 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:01.903 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:02.164 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:02.164 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:02.164 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:02.164 [60/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:02.164 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:02.164 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:02.164 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:02.164 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:02.164 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:02.164 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:02.164 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:02.458 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:02.458 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:02.458 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:02.458 [71/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:02.458 [72/705] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:02.458 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:02.458 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:02.458 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:02.458 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:02.719 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:02.719 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:02.719 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:02.981 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:02.981 [81/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:02.981 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:02.981 [83/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:02.981 [84/705] Linking static target lib/librte_ring.a 00:03:02.981 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:03.243 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:03.243 [87/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:03.243 [88/705] Linking static target lib/librte_eal.a 00:03:03.243 [89/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.243 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:03.243 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:03.243 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:03.243 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:03.504 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:03.504 [95/705] Linking static target lib/librte_mempool.a 00:03:03.504 [96/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:03.504 [97/705] Linking static target lib/librte_rcu.a 00:03:03.504 [98/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:03.504 [99/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:03.504 [100/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:03.766 [101/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:03.766 [102/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:03.766 [103/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:03.766 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.766 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:03.766 [106/705] Linking static target lib/librte_meter.a 00:03:04.028 [107/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.028 [108/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:04.028 [109/705] Linking static target lib/librte_net.a 00:03:04.028 [110/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:04.028 [111/705] Linking static target lib/librte_mbuf.a 00:03:04.028 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:04.028 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:04.028 [114/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:04.028 [115/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.028 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:04.290 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:04.290 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.290 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:04.551 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:04.551 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:04.812 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:04.812 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:04.812 [124/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:04.812 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:04.812 [126/705] Linking static target lib/librte_pci.a 00:03:05.100 [127/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:05.100 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:05.100 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:05.100 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:05.100 [131/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:05.100 [132/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.100 [133/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:05.100 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:05.100 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:05.100 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:05.100 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:05.100 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:05.100 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:05.100 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:05.380 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:05.380 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:05.380 [143/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:05.380 [144/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:05.380 [145/705] Linking static target lib/librte_cmdline.a 00:03:05.380 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:05.641 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:05.641 [148/705] Linking static target lib/librte_metrics.a 00:03:05.641 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:05.903 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.903 [151/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:05.903 [152/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:05.903 [153/705] Linking static target lib/librte_timer.a 00:03:05.903 [154/705] Generating lib/cmdline.sym_chk with 
a custom command (wrapped by meson to capture output) 00:03:06.164 [155/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:06.164 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.164 [157/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:06.164 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:06.424 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:06.424 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:06.424 [161/705] Linking static target lib/librte_bitratestats.a 00:03:06.686 [162/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.686 [163/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:06.946 [164/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:06.946 [165/705] Linking static target lib/acl/libavx2_tmp.a 00:03:06.946 [166/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:06.946 [167/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:06.946 [168/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:06.946 [169/705] Linking static target lib/librte_bbdev.a 00:03:06.946 [170/705] Linking static target lib/librte_ethdev.a 00:03:06.946 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:07.208 [172/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:07.208 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:07.208 [174/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:07.467 [175/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.467 [176/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:07.467 [177/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:07.467 [178/705] Linking static target lib/librte_hash.a 00:03:07.467 [179/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.467 [180/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:07.726 [181/705] Linking target lib/librte_eal.so.24.0 00:03:07.726 [182/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:07.726 [183/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:07.726 [184/705] Linking static target lib/librte_cfgfile.a 00:03:07.726 [185/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:07.726 [186/705] Linking target lib/librte_ring.so.24.0 00:03:07.726 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:07.726 [188/705] Linking target lib/librte_meter.so.24.0 00:03:07.726 [189/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:07.984 [190/705] Linking target lib/librte_rcu.so.24.0 00:03:07.984 [191/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.984 [192/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:07.984 [193/705] Linking target lib/librte_mempool.so.24.0 00:03:07.984 [194/705] Linking target lib/librte_pci.so.24.0 00:03:07.984 [195/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:07.984 [196/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.984 [197/705] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:07.984 [198/705] Linking target lib/librte_timer.so.24.0 00:03:07.984 [199/705] Linking target lib/librte_cfgfile.so.24.0 00:03:07.984 [200/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:07.984 [201/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:07.984 [202/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:07.984 [203/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:07.984 [204/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:07.984 [205/705] Linking target lib/librte_mbuf.so.24.0 00:03:08.242 [206/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:08.242 [207/705] Linking static target lib/librte_bpf.a 00:03:08.242 [208/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:08.243 [209/705] Linking target lib/librte_net.so.24.0 00:03:08.243 [210/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:08.243 [211/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:08.243 [212/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.243 [213/705] Linking target lib/librte_cmdline.so.24.0 00:03:08.501 [214/705] Linking target lib/librte_hash.so.24.0 00:03:08.501 [215/705] Linking target lib/librte_bbdev.so.24.0 00:03:08.501 [216/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:08.501 [217/705] Linking static target lib/librte_compressdev.a 00:03:08.501 [218/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:08.501 [219/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:08.501 [220/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:08.501 [221/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:08.501 [222/705] Linking static target lib/librte_acl.a 00:03:08.501 [223/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:08.761 [224/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:08.761 [225/705] Linking static target lib/librte_distributor.a 00:03:08.761 [226/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.761 [227/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:08.761 [228/705] Linking target lib/librte_acl.so.24.0 00:03:08.761 [229/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.761 [230/705] Linking target lib/librte_compressdev.so.24.0 00:03:08.761 [231/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:09.020 [232/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:09.020 [233/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.020 [234/705] Linking target lib/librte_distributor.so.24.0 00:03:09.020 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:09.020 [236/705] Linking static target lib/librte_dmadev.a 00:03:09.279 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:09.279 
[238/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:09.279 [239/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.279 [240/705] Linking target lib/librte_dmadev.so.24.0 00:03:09.538 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:09.538 [242/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:09.538 [243/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:09.538 [244/705] Linking static target lib/librte_efd.a 00:03:09.796 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:09.796 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.796 [247/705] Linking target lib/librte_efd.so.24.0 00:03:09.796 [248/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:10.054 [249/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:10.054 [250/705] Linking static target lib/librte_dispatcher.a 00:03:10.055 [251/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:10.055 [252/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:10.055 [253/705] Linking static target lib/librte_cryptodev.a 00:03:10.055 [254/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:10.313 [255/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:10.313 [256/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:10.313 [257/705] Linking static target lib/librte_gpudev.a 00:03:10.313 [258/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:10.313 [259/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.313 [260/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:10.572 [261/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.572 [262/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:10.572 [263/705] Linking target lib/librte_ethdev.so.24.0 00:03:10.572 [264/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:10.572 [265/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:10.572 [266/705] Linking static target lib/librte_gro.a 00:03:10.572 [267/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:10.572 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:10.572 [269/705] Linking target lib/librte_metrics.so.24.0 00:03:10.572 [270/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:10.572 [271/705] Linking target lib/librte_bpf.so.24.0 00:03:10.830 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:10.830 [273/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:10.830 [274/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:10.830 [275/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.830 [276/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.830 [277/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:10.830 [278/705] Linking target lib/librte_bitratestats.so.24.0 00:03:10.830 [279/705] 
Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:10.830 [280/705] Linking target lib/librte_gpudev.so.24.0 00:03:10.831 [281/705] Linking target lib/librte_gro.so.24.0 00:03:10.831 [282/705] Linking static target lib/librte_eventdev.a 00:03:10.831 [283/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:11.089 [284/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.089 [285/705] Linking target lib/librte_cryptodev.so.24.0 00:03:11.089 [286/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:11.089 [287/705] Linking static target lib/librte_gso.a 00:03:11.089 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:11.089 [289/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:11.089 [290/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:11.089 [291/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.089 [292/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:11.347 [293/705] Linking static target lib/librte_jobstats.a 00:03:11.347 [294/705] Linking target lib/librte_gso.so.24.0 00:03:11.347 [295/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:11.347 [296/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:11.347 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:11.347 [298/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.347 [299/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:11.347 [300/705] Linking static target lib/librte_ip_frag.a 00:03:11.347 [301/705] Linking target lib/librte_jobstats.so.24.0 00:03:11.606 [302/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:11.606 [303/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:11.606 [304/705] Linking static target lib/librte_latencystats.a 00:03:11.606 [305/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:11.606 [306/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.606 [307/705] Linking target lib/librte_ip_frag.so.24.0 00:03:11.606 [308/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:11.865 [309/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.865 [310/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:11.865 [311/705] Linking target lib/librte_latencystats.so.24.0 00:03:11.865 [312/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:11.865 [313/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:11.865 [314/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:11.865 [315/705] Linking static target lib/librte_lpm.a 00:03:11.865 [316/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:12.124 [317/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:12.124 [318/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.124 [319/705] Linking target lib/librte_lpm.so.24.0 00:03:12.124 [320/705] Compiling C 
object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:12.124 [321/705] Linking static target lib/librte_pcapng.a 00:03:12.124 [322/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:12.124 [323/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:12.124 [324/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:12.124 [325/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:12.124 [326/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:12.383 [327/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.383 [328/705] Linking target lib/librte_pcapng.so.24.0 00:03:12.383 [329/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:12.383 [330/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:12.383 [331/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.383 [332/705] Linking target lib/librte_eventdev.so.24.0 00:03:12.383 [333/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:12.383 [334/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:12.640 [335/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:12.640 [336/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:12.640 [337/705] Linking target lib/librte_dispatcher.so.24.0 00:03:12.640 [338/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:12.640 [339/705] Linking static target lib/librte_power.a 00:03:12.640 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:12.640 [341/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:12.640 [342/705] Linking static target lib/librte_regexdev.a 00:03:12.640 [343/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:12.640 [344/705] Linking static target lib/librte_rawdev.a 00:03:12.640 [345/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:12.640 [346/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:12.898 [347/705] Linking static target lib/librte_member.a 00:03:12.898 [348/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:12.899 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:12.899 [350/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:12.899 [351/705] Linking static target lib/librte_mldev.a 00:03:12.899 [352/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:12.899 [353/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.899 [354/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.157 [355/705] Linking target lib/librte_rawdev.so.24.0 00:03:13.157 [356/705] Linking target lib/librte_member.so.24.0 00:03:13.157 [357/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.157 [358/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:13.157 [359/705] Linking target lib/librte_power.so.24.0 00:03:13.157 [360/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:13.157 [361/705] Generating lib/regexdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:13.157 [362/705] Linking target lib/librte_regexdev.so.24.0 00:03:13.157 [363/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:13.157 [364/705] Linking static target lib/librte_reorder.a 00:03:13.157 [365/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:13.157 [366/705] Linking static target lib/librte_rib.a 00:03:13.414 [367/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:13.414 [368/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:13.414 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:13.414 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:13.414 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:13.414 [372/705] Linking static target lib/librte_stack.a 00:03:13.414 [373/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.414 [374/705] Linking target lib/librte_reorder.so.24.0 00:03:13.414 [375/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:13.673 [376/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.673 [377/705] Linking static target lib/librte_security.a 00:03:13.673 [378/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:13.673 [379/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.673 [380/705] Linking target lib/librte_stack.so.24.0 00:03:13.673 [381/705] Linking target lib/librte_rib.so.24.0 00:03:13.673 [382/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:13.673 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:13.673 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:13.932 [385/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.932 [386/705] Linking target lib/librte_security.so.24.0 00:03:13.932 [387/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:13.932 [388/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.932 [389/705] Linking target lib/librte_mldev.so.24.0 00:03:13.932 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:13.933 [391/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:13.933 [392/705] Linking static target lib/librte_sched.a 00:03:14.191 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:14.191 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:14.191 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.191 [396/705] Linking target lib/librte_sched.so.24.0 00:03:14.449 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:14.449 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:14.449 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:14.449 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:14.449 [401/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:14.708 [402/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:14.708 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 
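[Note] For reference, the configure summary printed earlier in this log (DPDK 23.11.0, "User defined options", plus the warning that running `meson [options]` instead of `meson setup [options]` is deprecated) corresponds roughly to the invocation sketched below. This is a reconstruction from the summary values only, not the job's actual script, which lives in SPDK's autobuild tooling:

    # Sketch only: option values copied from the "User defined options"
    # summary above; the real command line is not shown in this log.
    meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dmachine=native \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    ninja -C build-tmp -j10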
00:03:14.708 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:14.708 [405/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:14.967 [406/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:14.967 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:14.967 [408/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:14.967 [409/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:14.967 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:14.967 [411/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:15.225 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:15.225 [413/705] Linking static target lib/librte_ipsec.a 00:03:15.225 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.484 [415/705] Linking target lib/librte_ipsec.so.24.0 00:03:15.484 [416/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:15.484 [417/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:15.484 [418/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:15.484 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:15.484 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:15.752 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:15.752 [422/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:15.752 [423/705] Linking static target lib/librte_fib.a 00:03:15.752 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:15.752 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:15.752 [426/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:15.752 [427/705] Linking static target lib/librte_pdcp.a 00:03:16.062 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:16.062 [429/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.062 [430/705] Linking target lib/librte_fib.so.24.0 00:03:16.062 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.062 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:16.342 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:16.342 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:16.342 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:16.342 [436/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:16.342 [437/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:16.342 [438/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:16.599 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:16.599 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:16.599 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:16.599 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:16.857 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:16.857 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:16.857 [445/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 
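[Note] Each library in this log follows the same sequence: per-file "Compiling C object" steps, a "Linking static target" step that archives them into librte_<name>.a, a meson-wrapped <name>.sym_chk command (DPDK's check that the exported symbols match the library's symbol map), and a "Linking target ... .so.24.0" step plus a generated .symbols file for the shared object. A rough, hypothetical equivalent of that sequence for a single source file, with placeholder flags and paths (meson/ninja drive the real build):

    # Illustrative toolchain steps only, not the actual ninja rules.
    cc -fPIC -g -c rte_pdump.c -o rte_pdump.o        # "Compiling C object"
    ar rcs librte_pdump.a rte_pdump.o                # "Linking static target"
    cc -shared rte_pdump.o -o librte_pdump.so.24.0   # "Linking target ... .so.24.0"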
00:03:16.857 [446/705] Linking static target lib/librte_pdump.a 00:03:16.857 [447/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:17.114 [448/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:17.114 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:17.114 [450/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.114 [451/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:17.114 [452/705] Linking target lib/librte_pdump.so.24.0 00:03:17.114 [453/705] Linking static target lib/librte_port.a 00:03:17.371 [454/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:17.371 [455/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:17.371 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:17.371 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:17.371 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:17.629 [459/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.629 [460/705] Linking target lib/librte_port.so.24.0 00:03:17.629 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:17.629 [462/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:17.629 [463/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:17.629 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:17.629 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:17.629 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:17.629 [467/705] Linking static target lib/librte_table.a 00:03:17.887 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:18.144 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:18.144 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.144 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:18.144 [472/705] Linking target lib/librte_table.so.24.0 00:03:18.144 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:18.144 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:18.402 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:18.402 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:18.659 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:18.659 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:18.659 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:18.659 [480/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:18.659 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:18.917 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:18.918 [483/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:18.918 [484/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:18.918 [485/705] Linking static target lib/librte_graph.a 00:03:18.918 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 
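[Note] Once the install step populates the prefix shown in the configure summary (/home/vagrant/spdk_repo/dpdk/build), consumers normally locate these libraries through pkg-config rather than hard-coded paths. A minimal sketch, assuming the standard libdpdk.pc file that a DPDK install provides; helloworld.c is a placeholder source file:

    # Hypothetical consumer build against the prefix used in this job.
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    cc helloworld.c $(pkg-config --cflags --libs libdpdk) -o helloworld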
00:03:18.918 [487/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:18.918 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:19.176 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:19.434 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.434 [491/705] Linking target lib/librte_graph.so.24.0 00:03:19.434 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:19.434 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:19.434 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:19.434 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:19.713 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:19.713 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:19.713 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:19.713 [499/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:19.713 [500/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:19.713 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:19.713 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:19.971 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:19.971 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:19.972 [505/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:19.972 [506/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:19.972 [507/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:19.972 [508/705] Linking static target lib/librte_node.a 00:03:19.972 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:20.229 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:20.229 [511/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:20.229 [512/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.229 [513/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.229 [514/705] Linking target lib/librte_node.so.24.0 00:03:20.488 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:20.488 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.488 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.488 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.488 [519/705] Linking static target drivers/librte_bus_pci.a 00:03:20.488 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.488 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.488 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.488 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:20.488 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:20.488 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.488 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:20.747 [527/705] 
Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:20.747 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.747 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:20.747 [530/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.747 [531/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:20.747 [532/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.747 [533/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:21.006 [534/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:21.006 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:21.006 [536/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:21.006 [537/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.006 [538/705] Linking static target drivers/librte_mempool_ring.a 00:03:21.006 [539/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:21.006 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.006 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:21.006 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:21.266 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:21.524 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:21.524 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:21.782 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:22.041 [547/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:22.041 [548/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:22.301 [549/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:22.301 [550/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:22.301 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:22.301 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:22.301 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:22.558 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:22.558 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:22.558 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:22.817 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:22.817 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:23.075 [559/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:23.075 [560/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:23.075 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:23.335 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:23.335 [563/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:23.335 [564/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:23.335 [565/705] 
Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:23.593 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:23.593 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:23.593 [568/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:23.593 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:23.593 [570/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:23.593 [571/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:23.851 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:23.851 [573/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:23.851 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:23.851 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:23.851 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:24.111 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:24.111 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:24.111 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:24.368 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:24.368 [581/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:24.368 [582/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:24.368 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:24.368 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:24.638 [585/705] Linking static target drivers/librte_net_i40e.a 00:03:24.638 [586/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:24.638 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:24.638 [588/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:24.953 [589/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:24.953 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:24.954 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:24.954 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:24.954 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:25.212 [594/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:25.212 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:25.212 [596/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.212 [597/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:25.470 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:25.470 [599/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:25.470 [600/705] Linking static target lib/librte_vhost.a 00:03:25.470 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:25.470 [602/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:25.470 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:25.470 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:25.727 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:25.727 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:25.727 [607/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:25.727 [608/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:25.727 [609/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:25.985 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:25.985 [611/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:25.985 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:25.985 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:26.242 [614/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.242 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:26.242 [616/705] Linking target lib/librte_vhost.so.24.0 00:03:26.242 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:26.242 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:26.807 [619/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:26.807 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:26.807 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:26.807 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:27.065 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:27.065 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:27.065 [625/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:27.065 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:27.065 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:27.065 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:27.065 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:27.323 [630/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:27.323 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:27.323 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:27.323 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:27.323 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:27.581 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:27.581 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:27.581 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:27.838 [638/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:27.838 [639/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:27.838 [640/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:27.838 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:27.838 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:27.838 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:28.096 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:28.096 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:28.096 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:28.096 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:28.096 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:28.096 [649/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:28.354 [650/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:28.354 [651/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:28.354 [652/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:28.612 [653/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:28.612 [654/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:28.870 [655/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:28.870 [656/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:29.128 [657/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:29.128 [658/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:29.128 [659/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:29.128 [660/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:29.128 [661/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:29.386 [662/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:29.386 [663/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:29.386 [664/705] Linking static target lib/librte_pipeline.a 00:03:29.386 [665/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:29.644 [666/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:29.644 [667/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:29.644 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:29.644 [669/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:29.902 [670/705] Linking target app/dpdk-dumpcap 00:03:29.902 [671/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:29.902 [672/705] Linking target app/dpdk-graph 00:03:29.902 [673/705] Linking target app/dpdk-pdump 00:03:29.902 [674/705] Linking target app/dpdk-proc-info 00:03:29.902 [675/705] Linking target app/dpdk-test-acl 00:03:30.160 [676/705] Linking target app/dpdk-test-bbdev 00:03:30.160 [677/705] Linking target app/dpdk-test-cmdline 00:03:30.160 [678/705] Linking target app/dpdk-test-compress-perf 00:03:30.417 [679/705] Linking target app/dpdk-test-dma-perf 00:03:30.417 [680/705] Linking target app/dpdk-test-eventdev 00:03:30.417 [681/705] Linking target app/dpdk-test-crypto-perf 
00:03:30.417 [682/705] Linking target app/dpdk-test-fib 00:03:30.417 [683/705] Linking target app/dpdk-test-flow-perf 00:03:30.417 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:30.417 [685/705] Linking target app/dpdk-test-gpudev 00:03:30.675 [686/705] Linking target app/dpdk-test-mldev 00:03:30.675 [687/705] Linking target app/dpdk-test-pipeline 00:03:30.675 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:30.933 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:30.933 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:30.933 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:30.933 [692/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:31.191 [693/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:31.191 [694/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:31.191 [695/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:31.191 [696/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:31.448 [697/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.448 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:31.448 [699/705] Linking target lib/librte_pipeline.so.24.0 00:03:31.448 [700/705] Linking target app/dpdk-test-sad 00:03:31.706 [701/705] Linking target app/dpdk-test-regex 00:03:31.706 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:31.706 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:31.963 [704/705] Linking target app/dpdk-test-security-perf 00:03:31.963 [705/705] Linking target app/dpdk-testpmd 00:03:31.963 00:26:08 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:31.963 00:26:08 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:31.963 00:26:08 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:31.963 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:32.222 [0/1] Installing files. 
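The trace above (autobuild_common.sh lines 201 and 214) checks the host OS and then installs the freshly built DPDK tree. A minimal sketch of that flow, reconstructed only from the traced commands, is:

    #!/usr/bin/env bash
    # Sketch of the step traced above, not the script's actual source.
    # The build directory path is copied from the log; the FreeBSD branch
    # is left empty because this run only records the (false) test.
    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        :  # not exercised in this Linux run
    else
        ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
    fi

On this Linux host the test is false, so the script proceeds straight to ninja install, which produces the file listing that follows.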
00:03:32.222 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.222 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.223 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.224 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.224 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:32.225 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.484 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:32.485 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:32.486 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:32.486 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.486 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:32.487 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.487 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.747 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.747 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.747 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.747 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.747 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.747 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.747 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.747 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.747 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.747 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.747 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.747 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.748 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.749 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 
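All of these public rte_*.h headers land flat in build/include, and the libdpdk.pc / libdpdk-libs.pc files installed a little further down describe the whole tree, so a consumer needs nothing beyond pkg-config to compile and link against this DPDK build (this is how the SPDK configure step later in this log picks it up). A minimal sketch, using only paths that appear in this log and assuming pkg-config is available:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --cflags libdpdk   # expected: -I/home/vagrant/spdk_repo/dpdk/build/include ...
  pkg-config --libs libdpdk     # expected: -L/home/vagrant/spdk_repo/dpdk/build/lib -lrte_... ...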
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.750 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:32.751 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:32.751 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:32.751 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:32.751 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:32.751 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:32.751 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:32.751 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:32.751 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:32.751 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:32.751 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:32.751 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:32.751 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:32.751 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:32.751 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:32.751 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:32.751 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:32.751 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:32.751 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:32.751 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:32.751 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:32.751 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:32.751 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:32.751 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:32.751 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:32.751 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:32.751 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:32.751 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:32.751 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:32.751 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:32.751 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:32.751 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:32.751 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:32.751 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:32.751 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:32.751 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:32.751 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:32.751 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:32.751 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:32.751 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:32.751 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:32.751 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:32.751 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:32.751 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:32.751 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:32.751 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:32.751 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:32.751 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:32.751 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:32.751 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:32.751 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:32.751 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:32.751 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:32.751 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:32.751 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:32.751 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:32.751 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:32.751 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:32.751 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:32.751 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:32.751 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:32.751 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:32.751 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:32.751 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:32.751 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:32.751 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:32.751 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:32.751 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:32.751 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:32.751 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:32.751 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:32.751 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:32.751 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:32.751 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:32.751 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:32.751 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:32.751 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:32.751 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:32.751 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:32.751 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:32.751 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:32.751 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:32.751 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:32.751 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:32.751 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:32.751 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:32.751 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:32.751 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:32.751 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:32.751 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:32.751 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:32.751 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:32.751 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:32.751 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:32.752 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:32.752 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:32.752 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:32.752 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:32.752 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:32.752 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:32.752 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:32.752 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:32.752 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:32.752 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:32.752 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:32.752 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:32.752 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:32.752 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:32.752 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:32.752 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:32.752 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:32.752 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:32.752 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:32.752 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:32.752 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:32.752 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:32.752 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:32.752 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:32.752 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:32.752 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:32.752 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:32.752 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:32.752 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:32.752 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:32.752 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:32.752 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:32.752 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:32.752 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:32.752 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:32.752 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:32.752 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:32.752 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:32.752 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
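The './librte_*.so' -> 'dpdk/pmds-24.0/...' lines above are DPDK relocating its drivers (PMDs) into the versioned plugin directory lib/dpdk/pmds-24.0, which EAL scans at startup; a driver can also be loaded explicitly with EAL's -d option, which accepts either a single .so or a whole directory. A sketch, where <dpdk-app> is a placeholder for any EAL-based binary (the paths are the ones from this log):

  ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
  # expected: librte_bus_pci.so, librte_bus_vdev.so, librte_mempool_ring.so,
  #           librte_net_i40e.so (plus their versioned names)
  <dpdk-app> -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0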
00:03:32.752 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:32.752 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:32.752 00:26:09 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:32.752 ************************************ 00:03:32.752 END TEST build_native_dpdk 00:03:32.752 ************************************ 00:03:32.752 00:26:09 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:32.752 00:03:32.752 real 0m39.305s 00:03:32.752 user 4m29.142s 00:03:32.752 sys 0m42.142s 00:03:32.752 00:26:09 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:32.752 00:26:09 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:32.752 00:26:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:32.752 00:26:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:32.752 00:26:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:32.752 00:26:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:32.752 00:26:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:32.752 00:26:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:32.752 00:26:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:32.752 00:26:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:33.010 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:33.010 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.010 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.010 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:33.269 Using 'verbs' RDMA provider 00:03:44.635 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:56.922 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:56.922 Creating mk/config.mk...done. 00:03:56.922 Creating mk/cc.flags.mk...done. 00:03:56.922 Type 'make' to build. 00:03:56.922 00:26:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:56.922 00:26:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:56.922 00:26:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:56.922 00:26:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.922 ************************************ 00:03:56.922 START TEST make 00:03:56.922 ************************************ 00:03:56.922 00:26:32 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:56.922 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:56.922 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:56.922 meson setup builddir \ 00:03:56.922 -Dwith-libaio=enabled \ 00:03:56.922 -Dwith-liburing=enabled \ 00:03:56.922 -Dwith-libvfn=disabled \ 00:03:56.922 -Dwith-spdk=disabled \ 00:03:56.922 -Dexamples=false \ 00:03:56.922 -Dtests=false \ 00:03:56.922 -Dtools=false && \ 00:03:56.922 meson compile -C builddir && \ 00:03:56.922 cd -) 00:03:56.922 make[1]: Nothing to be done for 'all'.
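The parenthesized shell block above is how the SPDK make target drives the bundled xnvme build; the Meson output that follows comes from that setup run. The feature set it pins (libaio and liburing on; libvfn, the SPDK backend, examples, tests and tools off) can be adjusted afterwards without starting over, since Meson reconfigures in place. A sketch against the same builddir, with -Dexamples=true chosen purely as an illustration:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson configure builddir                  # with no -D options, prints the current values
  meson configure builddir -Dexamples=true  # flip one option in place
  meson compile -C builddir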
00:03:58.841 The Meson build system 00:03:58.841 Version: 1.5.0 00:03:58.841 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:58.841 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:58.841 Build type: native build 00:03:58.841 Project name: xnvme 00:03:58.841 Project version: 0.7.5 00:03:58.841 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:58.841 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:58.841 Host machine cpu family: x86_64 00:03:58.841 Host machine cpu: x86_64 00:03:58.841 Message: host_machine.system: linux 00:03:58.841 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:58.841 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:58.841 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:58.841 Run-time dependency threads found: YES 00:03:58.841 Has header "setupapi.h" : NO 00:03:58.841 Has header "linux/blkzoned.h" : YES 00:03:58.841 Has header "linux/blkzoned.h" : YES (cached) 00:03:58.841 Has header "libaio.h" : YES 00:03:58.841 Library aio found: YES 00:03:58.841 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:58.841 Run-time dependency liburing found: YES 2.2 00:03:58.841 Dependency libvfn skipped: feature with-libvfn disabled 00:03:58.841 Found CMake: /usr/bin/cmake (3.27.7) 00:03:58.841 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:58.841 Subproject spdk : skipped: feature with-spdk disabled 00:03:58.841 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.841 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.841 Library rt found: YES 00:03:58.841 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:58.842 Configuring xnvme_config.h using configuration 00:03:58.842 Configuring xnvme.spec using configuration 00:03:58.842 Run-time dependency bash-completion found: YES 2.11 00:03:58.842 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:58.842 Program cp found: YES (/usr/bin/cp) 00:03:58.842 Build targets in project: 3 00:03:58.842 00:03:58.842 xnvme 0.7.5 00:03:58.842 00:03:58.842 Subprojects 00:03:58.842 spdk : NO Feature 'with-spdk' disabled 00:03:58.842 00:03:58.842 User defined options 00:03:58.842 examples : false 00:03:58.842 tests : false 00:03:58.842 tools : false 00:03:58.842 with-libaio : enabled 00:03:58.842 with-liburing: enabled 00:03:58.842 with-libvfn : disabled 00:03:58.842 with-spdk : disabled 00:03:58.842 00:03:58.842 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:59.102 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:59.102 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:59.102 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:59.102 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:59.102 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:59.102 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:59.102 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:59.102 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:59.102 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:59.102 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:59.102 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 
00:03:59.102 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:59.102 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:59.361 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:59.361 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:59.361 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:59.361 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:59.361 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:59.361 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:59.361 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:59.361 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:59.361 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:59.361 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:59.361 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:59.361 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:59.361 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:59.361 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:59.361 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:59.361 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:59.361 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:59.361 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:59.361 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:59.361 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:59.361 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:59.361 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:59.361 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:59.361 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:59.361 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:59.361 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:59.361 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:59.361 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:59.361 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:59.361 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:59.361 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:59.361 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:59.361 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:59.361 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:59.361 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:59.361 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:59.361 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:59.361 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:59.621 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:59.621 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:59.621 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:59.621 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:59.621 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:59.621 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:59.621 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:59.621 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:59.621 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:59.621 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:59.621 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:59.621 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:59.621 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:59.621 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:59.621 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:59.621 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:59.621 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:59.621 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:59.621 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:59.622 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:59.622 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:59.880 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:59.880 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:00.138 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:00.138 [75/76] Linking static target lib/libxnvme.a 00:04:00.138 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:00.138 INFO: autodetecting backend as ninja 00:04:00.138 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:00.138 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:32.210 CC lib/ut/ut.o 00:04:32.210 CC lib/log/log.o 00:04:32.210 CC lib/log/log_flags.o 00:04:32.210 CC lib/log/log_deprecated.o 00:04:32.210 CC lib/ut_mock/mock.o 00:04:32.469 LIB libspdk_log.a 00:04:32.469 LIB libspdk_ut.a 00:04:32.469 LIB libspdk_ut_mock.a 00:04:32.469 SO libspdk_ut.so.2.0 00:04:32.469 SO libspdk_ut_mock.so.6.0 00:04:32.469 SO libspdk_log.so.7.1 00:04:32.469 SYMLINK libspdk_ut_mock.so 00:04:32.469 SYMLINK libspdk_ut.so 00:04:32.469 SYMLINK libspdk_log.so 00:04:32.727 CC lib/dma/dma.o 00:04:32.727 CXX lib/trace_parser/trace.o 00:04:32.727 CC lib/ioat/ioat.o 00:04:32.727 CC lib/util/base64.o 00:04:32.727 CC lib/util/cpuset.o 00:04:32.727 CC lib/util/bit_array.o 00:04:32.727 CC lib/util/crc16.o 00:04:32.727 CC lib/util/crc32.o 00:04:32.727 CC lib/util/crc32c.o 00:04:32.727 CC lib/vfio_user/host/vfio_user_pci.o 00:04:32.727 CC lib/util/crc32_ieee.o 00:04:32.727 CC lib/util/crc64.o 00:04:32.727 CC lib/util/dif.o 00:04:32.727 LIB libspdk_dma.a 00:04:32.727 CC lib/util/fd.o 00:04:32.727 SO libspdk_dma.so.5.0 00:04:32.727 CC lib/util/fd_group.o 00:04:33.034 CC lib/util/file.o 00:04:33.034 SYMLINK libspdk_dma.so 00:04:33.034 CC lib/util/hexlify.o 00:04:33.034 CC lib/util/iov.o 00:04:33.034 CC lib/util/math.o 00:04:33.034 LIB libspdk_ioat.a 00:04:33.034 CC 
lib/util/net.o 00:04:33.034 CC lib/vfio_user/host/vfio_user.o 00:04:33.034 SO libspdk_ioat.so.7.0 00:04:33.034 CC lib/util/pipe.o 00:04:33.034 SYMLINK libspdk_ioat.so 00:04:33.034 CC lib/util/strerror_tls.o 00:04:33.034 CC lib/util/string.o 00:04:33.034 CC lib/util/uuid.o 00:04:33.034 CC lib/util/xor.o 00:04:33.034 LIB libspdk_vfio_user.a 00:04:33.034 CC lib/util/zipf.o 00:04:33.034 SO libspdk_vfio_user.so.5.0 00:04:33.034 CC lib/util/md5.o 00:04:33.034 SYMLINK libspdk_vfio_user.so 00:04:33.352 LIB libspdk_util.a 00:04:33.352 SO libspdk_util.so.10.1 00:04:33.352 LIB libspdk_trace_parser.a 00:04:33.352 SO libspdk_trace_parser.so.6.0 00:04:33.615 SYMLINK libspdk_util.so 00:04:33.615 SYMLINK libspdk_trace_parser.so 00:04:33.615 CC lib/vmd/vmd.o 00:04:33.615 CC lib/vmd/led.o 00:04:33.615 CC lib/env_dpdk/env.o 00:04:33.615 CC lib/rdma_utils/rdma_utils.o 00:04:33.615 CC lib/conf/conf.o 00:04:33.615 CC lib/env_dpdk/memory.o 00:04:33.615 CC lib/env_dpdk/init.o 00:04:33.615 CC lib/json/json_parse.o 00:04:33.615 CC lib/idxd/idxd.o 00:04:33.615 CC lib/env_dpdk/pci.o 00:04:33.874 CC lib/json/json_util.o 00:04:33.874 LIB libspdk_conf.a 00:04:33.874 CC lib/json/json_write.o 00:04:33.874 SO libspdk_conf.so.6.0 00:04:33.874 LIB libspdk_rdma_utils.a 00:04:33.874 SO libspdk_rdma_utils.so.1.0 00:04:33.874 SYMLINK libspdk_conf.so 00:04:33.874 CC lib/idxd/idxd_user.o 00:04:33.874 CC lib/env_dpdk/threads.o 00:04:33.874 SYMLINK libspdk_rdma_utils.so 00:04:33.874 CC lib/env_dpdk/pci_ioat.o 00:04:34.133 CC lib/env_dpdk/pci_virtio.o 00:04:34.133 CC lib/env_dpdk/pci_vmd.o 00:04:34.133 CC lib/env_dpdk/pci_idxd.o 00:04:34.133 CC lib/env_dpdk/pci_event.o 00:04:34.133 CC lib/rdma_provider/common.o 00:04:34.133 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:34.133 LIB libspdk_json.a 00:04:34.133 CC lib/idxd/idxd_kernel.o 00:04:34.133 LIB libspdk_vmd.a 00:04:34.133 SO libspdk_json.so.6.0 00:04:34.133 CC lib/env_dpdk/sigbus_handler.o 00:04:34.133 CC lib/env_dpdk/pci_dpdk.o 00:04:34.133 SO libspdk_vmd.so.6.0 00:04:34.133 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:34.133 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:34.392 SYMLINK libspdk_json.so 00:04:34.392 SYMLINK libspdk_vmd.so 00:04:34.392 LIB libspdk_idxd.a 00:04:34.392 LIB libspdk_rdma_provider.a 00:04:34.392 SO libspdk_idxd.so.12.1 00:04:34.392 SO libspdk_rdma_provider.so.7.0 00:04:34.392 CC lib/jsonrpc/jsonrpc_server.o 00:04:34.392 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:34.392 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:34.392 SYMLINK libspdk_rdma_provider.so 00:04:34.392 CC lib/jsonrpc/jsonrpc_client.o 00:04:34.392 SYMLINK libspdk_idxd.so 00:04:34.651 LIB libspdk_jsonrpc.a 00:04:34.651 SO libspdk_jsonrpc.so.6.0 00:04:34.651 SYMLINK libspdk_jsonrpc.so 00:04:34.910 CC lib/rpc/rpc.o 00:04:35.168 LIB libspdk_env_dpdk.a 00:04:35.168 SO libspdk_env_dpdk.so.15.1 00:04:35.168 LIB libspdk_rpc.a 00:04:35.168 SO libspdk_rpc.so.6.0 00:04:35.168 SYMLINK libspdk_env_dpdk.so 00:04:35.168 SYMLINK libspdk_rpc.so 00:04:35.427 CC lib/keyring/keyring.o 00:04:35.427 CC lib/keyring/keyring_rpc.o 00:04:35.427 CC lib/trace/trace.o 00:04:35.427 CC lib/trace/trace_rpc.o 00:04:35.427 CC lib/trace/trace_flags.o 00:04:35.427 CC lib/notify/notify.o 00:04:35.427 CC lib/notify/notify_rpc.o 00:04:35.685 LIB libspdk_notify.a 00:04:35.685 SO libspdk_notify.so.6.0 00:04:35.685 LIB libspdk_keyring.a 00:04:35.685 SYMLINK libspdk_notify.so 00:04:35.685 SO libspdk_keyring.so.2.0 00:04:35.685 LIB libspdk_trace.a 00:04:35.685 SO libspdk_trace.so.11.0 00:04:35.685 SYMLINK libspdk_keyring.so 00:04:35.944 SYMLINK 
libspdk_trace.so 00:04:35.944 CC lib/sock/sock.o 00:04:35.944 CC lib/sock/sock_rpc.o 00:04:35.944 CC lib/thread/thread.o 00:04:35.944 CC lib/thread/iobuf.o 00:04:36.510 LIB libspdk_sock.a 00:04:36.510 SO libspdk_sock.so.10.0 00:04:36.510 SYMLINK libspdk_sock.so 00:04:36.768 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:36.768 CC lib/nvme/nvme_ns.o 00:04:36.768 CC lib/nvme/nvme_fabric.o 00:04:36.768 CC lib/nvme/nvme_ctrlr.o 00:04:36.768 CC lib/nvme/nvme_ns_cmd.o 00:04:36.769 CC lib/nvme/nvme_qpair.o 00:04:36.769 CC lib/nvme/nvme.o 00:04:36.769 CC lib/nvme/nvme_pcie_common.o 00:04:36.769 CC lib/nvme/nvme_pcie.o 00:04:37.336 CC lib/nvme/nvme_quirks.o 00:04:37.336 LIB libspdk_thread.a 00:04:37.336 SO libspdk_thread.so.11.0 00:04:37.336 CC lib/nvme/nvme_transport.o 00:04:37.336 SYMLINK libspdk_thread.so 00:04:37.336 CC lib/nvme/nvme_discovery.o 00:04:37.595 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:37.595 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:37.595 CC lib/nvme/nvme_tcp.o 00:04:37.595 CC lib/nvme/nvme_opal.o 00:04:37.595 CC lib/nvme/nvme_io_msg.o 00:04:37.595 CC lib/nvme/nvme_poll_group.o 00:04:37.853 CC lib/nvme/nvme_zns.o 00:04:37.853 CC lib/nvme/nvme_stubs.o 00:04:37.853 CC lib/nvme/nvme_auth.o 00:04:38.112 CC lib/nvme/nvme_cuse.o 00:04:38.112 CC lib/nvme/nvme_rdma.o 00:04:38.112 CC lib/accel/accel.o 00:04:38.370 CC lib/blob/blobstore.o 00:04:38.370 CC lib/init/json_config.o 00:04:38.370 CC lib/blob/request.o 00:04:38.370 CC lib/virtio/virtio.o 00:04:38.630 CC lib/init/subsystem.o 00:04:38.630 CC lib/init/subsystem_rpc.o 00:04:38.630 CC lib/init/rpc.o 00:04:38.630 CC lib/virtio/virtio_vhost_user.o 00:04:38.630 CC lib/accel/accel_rpc.o 00:04:38.888 CC lib/accel/accel_sw.o 00:04:38.888 LIB libspdk_init.a 00:04:38.888 SO libspdk_init.so.6.0 00:04:38.888 CC lib/fsdev/fsdev.o 00:04:38.888 SYMLINK libspdk_init.so 00:04:38.888 CC lib/virtio/virtio_vfio_user.o 00:04:38.888 CC lib/virtio/virtio_pci.o 00:04:38.888 CC lib/blob/zeroes.o 00:04:38.888 CC lib/fsdev/fsdev_io.o 00:04:38.888 CC lib/fsdev/fsdev_rpc.o 00:04:38.888 CC lib/blob/blob_bs_dev.o 00:04:39.147 LIB libspdk_virtio.a 00:04:39.147 CC lib/event/app_rpc.o 00:04:39.147 CC lib/event/app.o 00:04:39.147 CC lib/event/log_rpc.o 00:04:39.147 CC lib/event/reactor.o 00:04:39.147 CC lib/event/scheduler_static.o 00:04:39.147 SO libspdk_virtio.so.7.0 00:04:39.406 SYMLINK libspdk_virtio.so 00:04:39.406 LIB libspdk_nvme.a 00:04:39.406 LIB libspdk_accel.a 00:04:39.406 LIB libspdk_fsdev.a 00:04:39.406 SO libspdk_accel.so.16.0 00:04:39.406 SO libspdk_fsdev.so.2.0 00:04:39.406 SYMLINK libspdk_accel.so 00:04:39.406 SO libspdk_nvme.so.15.0 00:04:39.664 SYMLINK libspdk_fsdev.so 00:04:39.664 LIB libspdk_event.a 00:04:39.664 SO libspdk_event.so.14.0 00:04:39.664 CC lib/bdev/bdev.o 00:04:39.664 CC lib/bdev/bdev_rpc.o 00:04:39.664 CC lib/bdev/bdev_zone.o 00:04:39.664 CC lib/bdev/scsi_nvme.o 00:04:39.664 CC lib/bdev/part.o 00:04:39.664 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:39.664 SYMLINK libspdk_event.so 00:04:39.664 SYMLINK libspdk_nvme.so 00:04:40.601 LIB libspdk_fuse_dispatcher.a 00:04:40.601 SO libspdk_fuse_dispatcher.so.1.0 00:04:40.601 SYMLINK libspdk_fuse_dispatcher.so 00:04:41.175 LIB libspdk_blob.a 00:04:41.175 SO libspdk_blob.so.12.0 00:04:41.433 SYMLINK libspdk_blob.so 00:04:41.433 CC lib/blobfs/tree.o 00:04:41.434 CC lib/blobfs/blobfs.o 00:04:41.434 CC lib/lvol/lvol.o 00:04:42.372 LIB libspdk_blobfs.a 00:04:42.372 LIB libspdk_bdev.a 00:04:42.372 SO libspdk_blobfs.so.11.0 00:04:42.372 SYMLINK libspdk_blobfs.so 00:04:42.372 SO libspdk_bdev.so.17.0 00:04:42.372 
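In this make output, CC lines compile objects, LIB lines archive the static libspdk_*.a libraries, and the SO / SYMLINK pairs come from the --with-shared configure flag seen earlier: each library is additionally linked as a versioned shared object and given an unversioned alias. A sketch of inspecting one result, assuming the build finishes and using the version the log printed for libspdk_log:

  ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_log.so*
  # expected shape:
  #   libspdk_log.so -> libspdk_log.so.7.1
  #   libspdk_log.so.7.1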
LIB libspdk_lvol.a 00:04:42.637 SO libspdk_lvol.so.11.0 00:04:42.637 SYMLINK libspdk_bdev.so 00:04:42.637 SYMLINK libspdk_lvol.so 00:04:42.637 CC lib/scsi/dev.o 00:04:42.637 CC lib/scsi/lun.o 00:04:42.637 CC lib/scsi/port.o 00:04:42.637 CC lib/scsi/scsi.o 00:04:42.637 CC lib/scsi/scsi_bdev.o 00:04:42.637 CC lib/scsi/scsi_pr.o 00:04:42.637 CC lib/ftl/ftl_core.o 00:04:42.637 CC lib/nbd/nbd.o 00:04:42.637 CC lib/nvmf/ctrlr.o 00:04:42.637 CC lib/ublk/ublk.o 00:04:42.896 CC lib/ublk/ublk_rpc.o 00:04:42.896 CC lib/nvmf/ctrlr_discovery.o 00:04:42.896 CC lib/nvmf/ctrlr_bdev.o 00:04:42.896 CC lib/nvmf/subsystem.o 00:04:42.896 CC lib/nvmf/nvmf.o 00:04:43.154 CC lib/scsi/scsi_rpc.o 00:04:43.154 CC lib/ftl/ftl_init.o 00:04:43.154 CC lib/nbd/nbd_rpc.o 00:04:43.154 CC lib/scsi/task.o 00:04:43.154 CC lib/ftl/ftl_layout.o 00:04:43.412 CC lib/ftl/ftl_debug.o 00:04:43.412 LIB libspdk_nbd.a 00:04:43.412 CC lib/nvmf/nvmf_rpc.o 00:04:43.412 SO libspdk_nbd.so.7.0 00:04:43.412 LIB libspdk_ublk.a 00:04:43.412 LIB libspdk_scsi.a 00:04:43.412 SYMLINK libspdk_nbd.so 00:04:43.412 CC lib/nvmf/transport.o 00:04:43.412 SO libspdk_ublk.so.3.0 00:04:43.412 SO libspdk_scsi.so.9.0 00:04:43.412 SYMLINK libspdk_ublk.so 00:04:43.412 CC lib/nvmf/tcp.o 00:04:43.412 SYMLINK libspdk_scsi.so 00:04:43.412 CC lib/nvmf/stubs.o 00:04:43.412 CC lib/nvmf/mdns_server.o 00:04:43.671 CC lib/ftl/ftl_io.o 00:04:43.671 CC lib/nvmf/rdma.o 00:04:43.671 CC lib/ftl/ftl_sb.o 00:04:43.929 CC lib/nvmf/auth.o 00:04:43.929 CC lib/ftl/ftl_l2p.o 00:04:43.929 CC lib/ftl/ftl_l2p_flat.o 00:04:43.929 CC lib/ftl/ftl_nv_cache.o 00:04:44.187 CC lib/iscsi/conn.o 00:04:44.187 CC lib/iscsi/init_grp.o 00:04:44.187 CC lib/iscsi/iscsi.o 00:04:44.187 CC lib/iscsi/param.o 00:04:44.187 CC lib/iscsi/portal_grp.o 00:04:44.445 CC lib/iscsi/tgt_node.o 00:04:44.445 CC lib/iscsi/iscsi_subsystem.o 00:04:44.445 CC lib/iscsi/iscsi_rpc.o 00:04:44.703 CC lib/iscsi/task.o 00:04:44.703 CC lib/ftl/ftl_band.o 00:04:44.703 CC lib/ftl/ftl_band_ops.o 00:04:44.703 CC lib/vhost/vhost.o 00:04:44.703 CC lib/ftl/ftl_writer.o 00:04:44.703 CC lib/vhost/vhost_rpc.o 00:04:44.962 CC lib/vhost/vhost_scsi.o 00:04:44.962 CC lib/ftl/ftl_rq.o 00:04:44.962 CC lib/ftl/ftl_reloc.o 00:04:45.220 CC lib/ftl/ftl_l2p_cache.o 00:04:45.220 CC lib/ftl/ftl_p2l.o 00:04:45.220 CC lib/ftl/ftl_p2l_log.o 00:04:45.220 CC lib/ftl/mngt/ftl_mngt.o 00:04:45.479 CC lib/vhost/vhost_blk.o 00:04:45.479 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:45.479 CC lib/vhost/rte_vhost_user.o 00:04:45.479 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:45.479 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:45.479 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.479 LIB libspdk_iscsi.a 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.737 SO libspdk_iscsi.so.8.0 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.737 SYMLINK libspdk_iscsi.so 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.737 LIB libspdk_nvmf.a 00:04:45.737 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.737 CC lib/ftl/utils/ftl_conf.o 00:04:45.996 CC lib/ftl/utils/ftl_md.o 00:04:45.996 CC lib/ftl/utils/ftl_mempool.o 00:04:45.996 CC lib/ftl/utils/ftl_bitmap.o 00:04:45.996 SO libspdk_nvmf.so.20.0 00:04:45.996 CC lib/ftl/utils/ftl_property.o 00:04:45.996 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:45.996 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:45.996 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.255 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.255 SYMLINK libspdk_nvmf.so 00:04:46.255 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.255 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.255 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.255 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.255 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.255 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.255 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.255 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.255 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.255 CC lib/ftl/base/ftl_base_dev.o 00:04:46.255 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.513 CC lib/ftl/ftl_trace.o 00:04:46.513 LIB libspdk_vhost.a 00:04:46.513 SO libspdk_vhost.so.8.0 00:04:46.513 SYMLINK libspdk_vhost.so 00:04:46.513 LIB libspdk_ftl.a 00:04:46.771 SO libspdk_ftl.so.9.0 00:04:47.030 SYMLINK libspdk_ftl.so 00:04:47.288 CC module/env_dpdk/env_dpdk_rpc.o 00:04:47.288 CC module/accel/error/accel_error.o 00:04:47.288 CC module/accel/iaa/accel_iaa.o 00:04:47.288 CC module/sock/posix/posix.o 00:04:47.288 CC module/accel/ioat/accel_ioat.o 00:04:47.288 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:47.288 CC module/blob/bdev/blob_bdev.o 00:04:47.288 CC module/accel/dsa/accel_dsa.o 00:04:47.288 CC module/keyring/file/keyring.o 00:04:47.576 CC module/fsdev/aio/fsdev_aio.o 00:04:47.576 LIB libspdk_env_dpdk_rpc.a 00:04:47.576 SO libspdk_env_dpdk_rpc.so.6.0 00:04:47.576 SYMLINK libspdk_env_dpdk_rpc.so 00:04:47.576 CC module/keyring/file/keyring_rpc.o 00:04:47.576 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.576 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.576 CC module/accel/error/accel_error_rpc.o 00:04:47.576 LIB libspdk_scheduler_dynamic.a 00:04:47.576 SO libspdk_scheduler_dynamic.so.4.0 00:04:47.576 LIB libspdk_keyring_file.a 00:04:47.576 LIB libspdk_blob_bdev.a 00:04:47.576 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.846 CC module/accel/dsa/accel_dsa_rpc.o 00:04:47.846 CC module/keyring/linux/keyring.o 00:04:47.846 SO libspdk_keyring_file.so.2.0 00:04:47.846 SO libspdk_blob_bdev.so.12.0 00:04:47.846 LIB libspdk_accel_ioat.a 00:04:47.846 LIB libspdk_accel_iaa.a 00:04:47.846 LIB libspdk_accel_error.a 00:04:47.846 SO libspdk_accel_ioat.so.6.0 00:04:47.846 SO libspdk_accel_iaa.so.3.0 00:04:47.846 SO libspdk_accel_error.so.2.0 00:04:47.846 SYMLINK libspdk_keyring_file.so 00:04:47.846 SYMLINK libspdk_blob_bdev.so 00:04:47.846 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:47.846 CC module/keyring/linux/keyring_rpc.o 00:04:47.846 SYMLINK libspdk_accel_ioat.so 00:04:47.846 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.846 SYMLINK libspdk_accel_iaa.so 00:04:47.846 SYMLINK libspdk_accel_error.so 00:04:47.846 LIB libspdk_accel_dsa.a 00:04:47.846 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.846 SO libspdk_accel_dsa.so.5.0 00:04:47.846 LIB libspdk_keyring_linux.a 00:04:47.846 SYMLINK libspdk_accel_dsa.so 00:04:47.846 SO libspdk_keyring_linux.so.1.0 00:04:47.846 CC module/scheduler/gscheduler/gscheduler.o 00:04:48.104 LIB libspdk_scheduler_dpdk_governor.a 00:04:48.104 SYMLINK libspdk_keyring_linux.so 00:04:48.104 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:48.104 CC module/bdev/delay/vbdev_delay.o 00:04:48.104 CC module/blobfs/bdev/blobfs_bdev.o 00:04:48.104 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:48.104 CC module/bdev/gpt/gpt.o 00:04:48.104 CC module/bdev/error/vbdev_error.o 00:04:48.104 LIB libspdk_scheduler_gscheduler.a 00:04:48.104 SO libspdk_scheduler_gscheduler.so.4.0 00:04:48.104 LIB libspdk_fsdev_aio.a 00:04:48.104 CC module/bdev/lvol/vbdev_lvol.o 
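The module/bdev/* objects compiled above and just below (error, gpt, lvol, malloc, null and others) become SPDK's block-device modules, and once an application built from this tree is running they are driven over JSON-RPC. A sketch, assuming a target is up and listening on the default /var/tmp/spdk.sock socket:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0  # 64 MiB, 512-byte blocks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs                        # list the result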
00:04:48.104 CC module/bdev/malloc/bdev_malloc.o 00:04:48.104 SO libspdk_fsdev_aio.so.1.0 00:04:48.104 SYMLINK libspdk_scheduler_gscheduler.so 00:04:48.104 CC module/bdev/gpt/vbdev_gpt.o 00:04:48.104 LIB libspdk_sock_posix.a 00:04:48.104 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:48.104 SO libspdk_sock_posix.so.6.0 00:04:48.104 CC module/bdev/null/bdev_null.o 00:04:48.363 SYMLINK libspdk_fsdev_aio.so 00:04:48.363 CC module/bdev/null/bdev_null_rpc.o 00:04:48.363 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.363 SYMLINK libspdk_sock_posix.so 00:04:48.363 CC module/bdev/error/vbdev_error_rpc.o 00:04:48.363 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:48.363 LIB libspdk_blobfs_bdev.a 00:04:48.363 SO libspdk_blobfs_bdev.so.6.0 00:04:48.363 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.363 SYMLINK libspdk_blobfs_bdev.so 00:04:48.363 LIB libspdk_bdev_gpt.a 00:04:48.363 LIB libspdk_bdev_error.a 00:04:48.363 SO libspdk_bdev_error.so.6.0 00:04:48.363 SO libspdk_bdev_gpt.so.6.0 00:04:48.363 LIB libspdk_bdev_null.a 00:04:48.621 SO libspdk_bdev_null.so.6.0 00:04:48.621 LIB libspdk_bdev_malloc.a 00:04:48.621 SYMLINK libspdk_bdev_gpt.so 00:04:48.621 SYMLINK libspdk_bdev_error.so 00:04:48.621 SO libspdk_bdev_malloc.so.6.0 00:04:48.621 CC module/bdev/nvme/bdev_nvme.o 00:04:48.621 LIB libspdk_bdev_delay.a 00:04:48.621 SYMLINK libspdk_bdev_null.so 00:04:48.621 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.621 SO libspdk_bdev_delay.so.6.0 00:04:48.621 SYMLINK libspdk_bdev_malloc.so 00:04:48.621 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.621 CC module/bdev/raid/bdev_raid.o 00:04:48.621 SYMLINK libspdk_bdev_delay.so 00:04:48.621 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.621 LIB libspdk_bdev_lvol.a 00:04:48.621 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.621 CC module/bdev/split/vbdev_split.o 00:04:48.621 SO libspdk_bdev_lvol.so.6.0 00:04:48.621 CC module/bdev/xnvme/bdev_xnvme.o 00:04:48.879 SYMLINK libspdk_bdev_lvol.so 00:04:48.879 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.879 CC module/bdev/aio/bdev_aio.o 00:04:48.879 CC module/bdev/aio/bdev_aio_rpc.o 00:04:48.879 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.879 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.879 LIB libspdk_bdev_split.a 00:04:48.879 SO libspdk_bdev_split.so.6.0 00:04:48.879 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.879 SYMLINK libspdk_bdev_split.so 00:04:49.138 LIB libspdk_bdev_passthru.a 00:04:49.138 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:49.138 SO libspdk_bdev_passthru.so.6.0 00:04:49.138 LIB libspdk_bdev_zone_block.a 00:04:49.138 SYMLINK libspdk_bdev_passthru.so 00:04:49.138 SO libspdk_bdev_zone_block.so.6.0 00:04:49.138 CC module/bdev/ftl/bdev_ftl.o 00:04:49.138 CC module/bdev/raid/raid0.o 00:04:49.138 SYMLINK libspdk_bdev_zone_block.so 00:04:49.138 CC module/bdev/iscsi/bdev_iscsi.o 00:04:49.138 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:49.138 LIB libspdk_bdev_xnvme.a 00:04:49.138 LIB libspdk_bdev_aio.a 00:04:49.138 SO libspdk_bdev_xnvme.so.3.0 00:04:49.138 SO libspdk_bdev_aio.so.6.0 00:04:49.138 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:49.138 SYMLINK libspdk_bdev_xnvme.so 00:04:49.138 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:49.396 SYMLINK libspdk_bdev_aio.so 00:04:49.396 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:49.396 CC module/bdev/raid/raid1.o 00:04:49.396 CC module/bdev/raid/concat.o 00:04:49.396 CC module/bdev/nvme/nvme_rpc.o 00:04:49.396 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:49.396 CC module/bdev/nvme/bdev_mdns_client.o 00:04:49.396 LIB 
libspdk_bdev_iscsi.a 00:04:49.655 LIB libspdk_bdev_ftl.a 00:04:49.655 SO libspdk_bdev_iscsi.so.6.0 00:04:49.655 SYMLINK libspdk_bdev_iscsi.so 00:04:49.655 SO libspdk_bdev_ftl.so.6.0 00:04:49.655 CC module/bdev/nvme/vbdev_opal.o 00:04:49.655 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:49.655 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:49.655 SYMLINK libspdk_bdev_ftl.so 00:04:49.655 LIB libspdk_bdev_virtio.a 00:04:49.655 SO libspdk_bdev_virtio.so.6.0 00:04:49.655 LIB libspdk_bdev_raid.a 00:04:49.913 SYMLINK libspdk_bdev_virtio.so 00:04:49.913 SO libspdk_bdev_raid.so.6.0 00:04:49.913 SYMLINK libspdk_bdev_raid.so 00:04:51.289 LIB libspdk_bdev_nvme.a 00:04:51.289 SO libspdk_bdev_nvme.so.7.1 00:04:51.550 SYMLINK libspdk_bdev_nvme.so 00:04:51.809 CC module/event/subsystems/iobuf/iobuf.o 00:04:51.809 CC module/event/subsystems/keyring/keyring.o 00:04:51.809 CC module/event/subsystems/sock/sock.o 00:04:51.809 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:51.809 CC module/event/subsystems/scheduler/scheduler.o 00:04:51.809 CC module/event/subsystems/fsdev/fsdev.o 00:04:51.809 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:51.809 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:51.809 CC module/event/subsystems/vmd/vmd.o 00:04:51.809 LIB libspdk_event_vhost_blk.a 00:04:51.809 LIB libspdk_event_keyring.a 00:04:51.809 SO libspdk_event_vhost_blk.so.3.0 00:04:51.809 LIB libspdk_event_sock.a 00:04:51.809 LIB libspdk_event_fsdev.a 00:04:51.809 LIB libspdk_event_scheduler.a 00:04:52.066 LIB libspdk_event_iobuf.a 00:04:52.066 LIB libspdk_event_vmd.a 00:04:52.066 SO libspdk_event_keyring.so.1.0 00:04:52.066 SO libspdk_event_fsdev.so.1.0 00:04:52.066 SO libspdk_event_sock.so.5.0 00:04:52.066 SO libspdk_event_scheduler.so.4.0 00:04:52.066 SO libspdk_event_iobuf.so.3.0 00:04:52.066 SO libspdk_event_vmd.so.6.0 00:04:52.066 SYMLINK libspdk_event_vhost_blk.so 00:04:52.066 SYMLINK libspdk_event_sock.so 00:04:52.066 SYMLINK libspdk_event_keyring.so 00:04:52.066 SYMLINK libspdk_event_fsdev.so 00:04:52.066 SYMLINK libspdk_event_scheduler.so 00:04:52.066 SYMLINK libspdk_event_iobuf.so 00:04:52.066 SYMLINK libspdk_event_vmd.so 00:04:52.323 CC module/event/subsystems/accel/accel.o 00:04:52.323 LIB libspdk_event_accel.a 00:04:52.323 SO libspdk_event_accel.so.6.0 00:04:52.586 SYMLINK libspdk_event_accel.so 00:04:52.586 CC module/event/subsystems/bdev/bdev.o 00:04:52.845 LIB libspdk_event_bdev.a 00:04:52.845 SO libspdk_event_bdev.so.6.0 00:04:52.845 SYMLINK libspdk_event_bdev.so 00:04:53.103 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:53.103 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:53.103 CC module/event/subsystems/nbd/nbd.o 00:04:53.103 CC module/event/subsystems/ublk/ublk.o 00:04:53.103 CC module/event/subsystems/scsi/scsi.o 00:04:53.103 LIB libspdk_event_ublk.a 00:04:53.360 LIB libspdk_event_nbd.a 00:04:53.360 LIB libspdk_event_scsi.a 00:04:53.360 SO libspdk_event_ublk.so.3.0 00:04:53.360 SO libspdk_event_nbd.so.6.0 00:04:53.360 SO libspdk_event_scsi.so.6.0 00:04:53.360 SYMLINK libspdk_event_ublk.so 00:04:53.360 SYMLINK libspdk_event_nbd.so 00:04:53.360 SYMLINK libspdk_event_scsi.so 00:04:53.360 LIB libspdk_event_nvmf.a 00:04:53.360 SO libspdk_event_nvmf.so.6.0 00:04:53.360 SYMLINK libspdk_event_nvmf.so 00:04:53.617 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:53.617 CC module/event/subsystems/iscsi/iscsi.o 00:04:53.617 LIB libspdk_event_vhost_scsi.a 00:04:53.617 LIB libspdk_event_iscsi.a 00:04:53.617 SO libspdk_event_vhost_scsi.so.3.0 00:04:53.617 SO libspdk_event_iscsi.so.6.0 
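The LIB / SO / SYMLINK triples above are the usual versioned shared-library layout: a static archive, an ABI-versioned shared object, and an unversioned symlink for the linker. A minimal sketch of the same convention, with an illustrative library name and version rather than SPDK's actual build rules:

    gcc -c -fPIC demo.c -o demo.o
    ar rcs libspdk_demo.a demo.o                                              # the "LIB ... .a" step
    gcc -shared -Wl,-soname,libspdk_demo.so.6 -o libspdk_demo.so.6.0 demo.o   # the "SO ... .so.6.0" step
    ln -sf libspdk_demo.so.6.0 libspdk_demo.so                                # the "SYMLINK" step

The soname embedded at link time is what lets binaries built against the plain .so resolve to the versioned object at run time.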
00:04:53.617 SYMLINK libspdk_event_vhost_scsi.so 00:04:53.617 SYMLINK libspdk_event_iscsi.so 00:04:53.875 SO libspdk.so.6.0 00:04:53.875 SYMLINK libspdk.so 00:04:54.133 CC app/trace_record/trace_record.o 00:04:54.133 CC app/spdk_lspci/spdk_lspci.o 00:04:54.133 CXX app/trace/trace.o 00:04:54.133 CC app/spdk_nvme_perf/perf.o 00:04:54.133 CC app/iscsi_tgt/iscsi_tgt.o 00:04:54.133 CC app/nvmf_tgt/nvmf_main.o 00:04:54.133 CC app/spdk_tgt/spdk_tgt.o 00:04:54.133 CC test/thread/poller_perf/poller_perf.o 00:04:54.133 CC examples/util/zipf/zipf.o 00:04:54.133 CC examples/ioat/perf/perf.o 00:04:54.133 LINK spdk_lspci 00:04:54.391 LINK iscsi_tgt 00:04:54.391 LINK spdk_tgt 00:04:54.391 LINK nvmf_tgt 00:04:54.391 LINK poller_perf 00:04:54.391 LINK spdk_trace_record 00:04:54.391 LINK zipf 00:04:54.391 LINK ioat_perf 00:04:54.391 CC app/spdk_nvme_identify/identify.o 00:04:54.391 LINK spdk_trace 00:04:54.649 TEST_HEADER include/spdk/accel.h 00:04:54.649 TEST_HEADER include/spdk/accel_module.h 00:04:54.649 TEST_HEADER include/spdk/assert.h 00:04:54.649 TEST_HEADER include/spdk/barrier.h 00:04:54.649 TEST_HEADER include/spdk/base64.h 00:04:54.649 TEST_HEADER include/spdk/bdev.h 00:04:54.649 TEST_HEADER include/spdk/bdev_module.h 00:04:54.649 CC examples/ioat/verify/verify.o 00:04:54.649 TEST_HEADER include/spdk/bdev_zone.h 00:04:54.649 TEST_HEADER include/spdk/bit_array.h 00:04:54.649 TEST_HEADER include/spdk/bit_pool.h 00:04:54.649 TEST_HEADER include/spdk/blob_bdev.h 00:04:54.649 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:54.649 TEST_HEADER include/spdk/blobfs.h 00:04:54.649 TEST_HEADER include/spdk/blob.h 00:04:54.649 TEST_HEADER include/spdk/conf.h 00:04:54.649 TEST_HEADER include/spdk/config.h 00:04:54.649 TEST_HEADER include/spdk/cpuset.h 00:04:54.649 TEST_HEADER include/spdk/crc16.h 00:04:54.649 CC app/spdk_nvme_discover/discovery_aer.o 00:04:54.649 TEST_HEADER include/spdk/crc32.h 00:04:54.649 TEST_HEADER include/spdk/crc64.h 00:04:54.649 TEST_HEADER include/spdk/dif.h 00:04:54.649 TEST_HEADER include/spdk/dma.h 00:04:54.649 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:54.649 TEST_HEADER include/spdk/endian.h 00:04:54.649 CC test/dma/test_dma/test_dma.o 00:04:54.649 TEST_HEADER include/spdk/env_dpdk.h 00:04:54.649 CC app/spdk_top/spdk_top.o 00:04:54.649 TEST_HEADER include/spdk/env.h 00:04:54.649 TEST_HEADER include/spdk/event.h 00:04:54.649 TEST_HEADER include/spdk/fd_group.h 00:04:54.649 TEST_HEADER include/spdk/fd.h 00:04:54.649 TEST_HEADER include/spdk/file.h 00:04:54.649 TEST_HEADER include/spdk/fsdev.h 00:04:54.649 TEST_HEADER include/spdk/fsdev_module.h 00:04:54.649 TEST_HEADER include/spdk/ftl.h 00:04:54.649 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:54.649 TEST_HEADER include/spdk/gpt_spec.h 00:04:54.649 TEST_HEADER include/spdk/hexlify.h 00:04:54.649 TEST_HEADER include/spdk/histogram_data.h 00:04:54.649 TEST_HEADER include/spdk/idxd.h 00:04:54.649 TEST_HEADER include/spdk/idxd_spec.h 00:04:54.649 CC test/app/bdev_svc/bdev_svc.o 00:04:54.649 TEST_HEADER include/spdk/init.h 00:04:54.649 TEST_HEADER include/spdk/ioat.h 00:04:54.649 TEST_HEADER include/spdk/ioat_spec.h 00:04:54.649 TEST_HEADER include/spdk/iscsi_spec.h 00:04:54.649 TEST_HEADER include/spdk/json.h 00:04:54.649 TEST_HEADER include/spdk/jsonrpc.h 00:04:54.649 TEST_HEADER include/spdk/keyring.h 00:04:54.649 TEST_HEADER include/spdk/keyring_module.h 00:04:54.649 TEST_HEADER include/spdk/likely.h 00:04:54.649 TEST_HEADER include/spdk/log.h 00:04:54.649 TEST_HEADER include/spdk/lvol.h 00:04:54.649 TEST_HEADER 
include/spdk/md5.h 00:04:54.649 TEST_HEADER include/spdk/memory.h 00:04:54.649 TEST_HEADER include/spdk/mmio.h 00:04:54.649 TEST_HEADER include/spdk/nbd.h 00:04:54.649 TEST_HEADER include/spdk/net.h 00:04:54.649 TEST_HEADER include/spdk/notify.h 00:04:54.650 TEST_HEADER include/spdk/nvme.h 00:04:54.650 TEST_HEADER include/spdk/nvme_intel.h 00:04:54.650 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:54.650 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:54.650 TEST_HEADER include/spdk/nvme_spec.h 00:04:54.650 TEST_HEADER include/spdk/nvme_zns.h 00:04:54.650 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:54.650 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:54.650 TEST_HEADER include/spdk/nvmf.h 00:04:54.650 TEST_HEADER include/spdk/nvmf_spec.h 00:04:54.650 TEST_HEADER include/spdk/nvmf_transport.h 00:04:54.650 TEST_HEADER include/spdk/opal.h 00:04:54.650 TEST_HEADER include/spdk/opal_spec.h 00:04:54.650 CC app/spdk_dd/spdk_dd.o 00:04:54.650 TEST_HEADER include/spdk/pci_ids.h 00:04:54.650 TEST_HEADER include/spdk/pipe.h 00:04:54.650 TEST_HEADER include/spdk/queue.h 00:04:54.650 TEST_HEADER include/spdk/reduce.h 00:04:54.650 TEST_HEADER include/spdk/rpc.h 00:04:54.650 TEST_HEADER include/spdk/scheduler.h 00:04:54.650 TEST_HEADER include/spdk/scsi.h 00:04:54.650 TEST_HEADER include/spdk/scsi_spec.h 00:04:54.650 TEST_HEADER include/spdk/sock.h 00:04:54.650 TEST_HEADER include/spdk/stdinc.h 00:04:54.650 TEST_HEADER include/spdk/string.h 00:04:54.650 TEST_HEADER include/spdk/thread.h 00:04:54.650 TEST_HEADER include/spdk/trace.h 00:04:54.650 TEST_HEADER include/spdk/trace_parser.h 00:04:54.650 TEST_HEADER include/spdk/tree.h 00:04:54.650 TEST_HEADER include/spdk/ublk.h 00:04:54.650 TEST_HEADER include/spdk/util.h 00:04:54.650 TEST_HEADER include/spdk/uuid.h 00:04:54.650 TEST_HEADER include/spdk/version.h 00:04:54.650 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:54.650 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:54.650 TEST_HEADER include/spdk/vhost.h 00:04:54.650 TEST_HEADER include/spdk/vmd.h 00:04:54.650 LINK spdk_nvme_perf 00:04:54.650 TEST_HEADER include/spdk/xor.h 00:04:54.650 TEST_HEADER include/spdk/zipf.h 00:04:54.650 CXX test/cpp_headers/accel.o 00:04:54.650 LINK spdk_nvme_discover 00:04:54.908 LINK interrupt_tgt 00:04:54.908 LINK verify 00:04:54.908 LINK bdev_svc 00:04:54.908 CXX test/cpp_headers/accel_module.o 00:04:54.908 CXX test/cpp_headers/assert.o 00:04:54.908 CC test/app/histogram_perf/histogram_perf.o 00:04:54.908 CC test/app/jsoncat/jsoncat.o 00:04:54.908 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:55.166 CC test/app/stub/stub.o 00:04:55.166 LINK spdk_dd 00:04:55.166 CXX test/cpp_headers/barrier.o 00:04:55.166 LINK test_dma 00:04:55.166 LINK histogram_perf 00:04:55.166 LINK jsoncat 00:04:55.166 CC examples/thread/thread/thread_ex.o 00:04:55.166 LINK stub 00:04:55.166 CXX test/cpp_headers/base64.o 00:04:55.166 LINK spdk_nvme_identify 00:04:55.424 CXX test/cpp_headers/bdev.o 00:04:55.424 LINK thread 00:04:55.424 CC app/fio/nvme/fio_plugin.o 00:04:55.424 CC app/vhost/vhost.o 00:04:55.424 CC examples/sock/hello_world/hello_sock.o 00:04:55.424 CC test/event/event_perf/event_perf.o 00:04:55.424 LINK spdk_top 00:04:55.424 LINK nvme_fuzz 00:04:55.424 CC test/env/mem_callbacks/mem_callbacks.o 00:04:55.424 CC app/fio/bdev/fio_plugin.o 00:04:55.682 CXX test/cpp_headers/bdev_module.o 00:04:55.682 LINK event_perf 00:04:55.682 LINK vhost 00:04:55.682 CC test/event/reactor/reactor.o 00:04:55.682 LINK hello_sock 00:04:55.682 CC test/event/reactor_perf/reactor_perf.o 00:04:55.682 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:55.682 CXX test/cpp_headers/bdev_zone.o 00:04:55.682 CC test/env/vtophys/vtophys.o 00:04:55.682 LINK reactor 00:04:55.682 LINK reactor_perf 00:04:56.003 CC test/event/app_repeat/app_repeat.o 00:04:56.003 LINK spdk_nvme 00:04:56.003 CC examples/vmd/lsvmd/lsvmd.o 00:04:56.003 LINK vtophys 00:04:56.003 CXX test/cpp_headers/bit_array.o 00:04:56.003 CC examples/vmd/led/led.o 00:04:56.003 LINK app_repeat 00:04:56.003 LINK mem_callbacks 00:04:56.003 LINK spdk_bdev 00:04:56.003 LINK lsvmd 00:04:56.003 CC examples/idxd/perf/perf.o 00:04:56.267 LINK led 00:04:56.267 CXX test/cpp_headers/bit_pool.o 00:04:56.267 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:56.267 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:56.267 CC examples/accel/perf/accel_perf.o 00:04:56.267 CC test/env/memory/memory_ut.o 00:04:56.267 CC test/event/scheduler/scheduler.o 00:04:56.267 CXX test/cpp_headers/blob_bdev.o 00:04:56.267 CC test/env/pci/pci_ut.o 00:04:56.267 LINK env_dpdk_post_init 00:04:56.267 LINK hello_fsdev 00:04:56.526 LINK idxd_perf 00:04:56.526 CXX test/cpp_headers/blobfs_bdev.o 00:04:56.526 CC examples/blob/hello_world/hello_blob.o 00:04:56.526 CXX test/cpp_headers/blobfs.o 00:04:56.526 LINK scheduler 00:04:56.526 CXX test/cpp_headers/blob.o 00:04:56.526 CXX test/cpp_headers/conf.o 00:04:56.526 LINK pci_ut 00:04:56.526 CXX test/cpp_headers/config.o 00:04:56.526 CXX test/cpp_headers/cpuset.o 00:04:56.784 LINK hello_blob 00:04:56.784 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:56.784 CC test/nvme/aer/aer.o 00:04:56.784 LINK accel_perf 00:04:56.784 CC examples/blob/cli/blobcli.o 00:04:56.784 CXX test/cpp_headers/crc16.o 00:04:56.784 CC examples/nvme/hello_world/hello_world.o 00:04:56.784 CXX test/cpp_headers/crc32.o 00:04:56.784 CXX test/cpp_headers/crc64.o 00:04:56.785 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:57.043 LINK aer 00:04:57.043 CC test/nvme/reset/reset.o 00:04:57.043 CXX test/cpp_headers/dif.o 00:04:57.043 CC test/nvme/sgl/sgl.o 00:04:57.043 LINK hello_world 00:04:57.043 CXX test/cpp_headers/dma.o 00:04:57.043 CC examples/bdev/hello_world/hello_bdev.o 00:04:57.043 LINK blobcli 00:04:57.302 CXX test/cpp_headers/endian.o 00:04:57.302 CC examples/nvme/reconnect/reconnect.o 00:04:57.302 LINK reset 00:04:57.302 CC examples/bdev/bdevperf/bdevperf.o 00:04:57.302 LINK vhost_fuzz 00:04:57.302 LINK sgl 00:04:57.302 LINK hello_bdev 00:04:57.302 CXX test/cpp_headers/env_dpdk.o 00:04:57.302 LINK memory_ut 00:04:57.302 CXX test/cpp_headers/env.o 00:04:57.302 CC test/rpc_client/rpc_client_test.o 00:04:57.561 CC test/nvme/e2edp/nvme_dp.o 00:04:57.561 LINK iscsi_fuzz 00:04:57.561 LINK reconnect 00:04:57.561 CC examples/nvme/arbitration/arbitration.o 00:04:57.561 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:57.561 CXX test/cpp_headers/event.o 00:04:57.561 CC examples/nvme/hotplug/hotplug.o 00:04:57.561 LINK rpc_client_test 00:04:57.561 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:57.561 CXX test/cpp_headers/fd_group.o 00:04:57.820 LINK nvme_dp 00:04:57.820 CC test/nvme/overhead/overhead.o 00:04:57.820 CC test/nvme/err_injection/err_injection.o 00:04:57.820 LINK cmb_copy 00:04:57.820 CXX test/cpp_headers/fd.o 00:04:57.820 CC examples/nvme/abort/abort.o 00:04:57.820 LINK hotplug 00:04:57.820 LINK arbitration 00:04:57.820 CXX test/cpp_headers/file.o 00:04:57.820 LINK bdevperf 00:04:57.820 LINK nvme_manage 00:04:57.820 CXX test/cpp_headers/fsdev.o 00:04:57.820 CXX test/cpp_headers/fsdev_module.o 00:04:58.079 LINK err_injection 00:04:58.079 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:04:58.079 CXX test/cpp_headers/ftl.o 00:04:58.079 LINK overhead 00:04:58.079 CC test/nvme/startup/startup.o 00:04:58.079 CXX test/cpp_headers/fuse_dispatcher.o 00:04:58.079 LINK pmr_persistence 00:04:58.079 CC test/nvme/reserve/reserve.o 00:04:58.079 LINK abort 00:04:58.079 CC test/nvme/connect_stress/connect_stress.o 00:04:58.079 CC test/nvme/simple_copy/simple_copy.o 00:04:58.079 CC test/nvme/boot_partition/boot_partition.o 00:04:58.079 CXX test/cpp_headers/gpt_spec.o 00:04:58.338 CC test/nvme/compliance/nvme_compliance.o 00:04:58.338 LINK startup 00:04:58.338 CC test/nvme/fused_ordering/fused_ordering.o 00:04:58.338 CXX test/cpp_headers/hexlify.o 00:04:58.338 LINK reserve 00:04:58.338 LINK boot_partition 00:04:58.338 LINK connect_stress 00:04:58.338 LINK simple_copy 00:04:58.338 CXX test/cpp_headers/histogram_data.o 00:04:58.338 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:58.338 LINK fused_ordering 00:04:58.338 CC test/nvme/fdp/fdp.o 00:04:58.338 CC test/nvme/cuse/cuse.o 00:04:58.595 CC examples/nvmf/nvmf/nvmf.o 00:04:58.595 CXX test/cpp_headers/idxd.o 00:04:58.595 CXX test/cpp_headers/idxd_spec.o 00:04:58.595 LINK nvme_compliance 00:04:58.595 LINK doorbell_aers 00:04:58.595 CC test/accel/dif/dif.o 00:04:58.595 CC test/blobfs/mkfs/mkfs.o 00:04:58.595 CXX test/cpp_headers/init.o 00:04:58.595 CXX test/cpp_headers/ioat.o 00:04:58.595 LINK fdp 00:04:58.853 CXX test/cpp_headers/ioat_spec.o 00:04:58.853 CXX test/cpp_headers/iscsi_spec.o 00:04:58.853 CC test/lvol/esnap/esnap.o 00:04:58.853 LINK nvmf 00:04:58.853 CXX test/cpp_headers/json.o 00:04:58.853 LINK mkfs 00:04:58.853 CXX test/cpp_headers/jsonrpc.o 00:04:58.853 CXX test/cpp_headers/keyring.o 00:04:58.853 CXX test/cpp_headers/keyring_module.o 00:04:58.853 CXX test/cpp_headers/likely.o 00:04:58.853 CXX test/cpp_headers/log.o 00:04:58.853 CXX test/cpp_headers/lvol.o 00:04:58.853 CXX test/cpp_headers/md5.o 00:04:58.853 CXX test/cpp_headers/memory.o 00:04:59.111 CXX test/cpp_headers/mmio.o 00:04:59.111 CXX test/cpp_headers/nbd.o 00:04:59.111 CXX test/cpp_headers/net.o 00:04:59.111 CXX test/cpp_headers/notify.o 00:04:59.111 CXX test/cpp_headers/nvme.o 00:04:59.111 CXX test/cpp_headers/nvme_intel.o 00:04:59.111 CXX test/cpp_headers/nvme_ocssd.o 00:04:59.111 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:59.111 CXX test/cpp_headers/nvme_spec.o 00:04:59.111 CXX test/cpp_headers/nvme_zns.o 00:04:59.111 CXX test/cpp_headers/nvmf_cmd.o 00:04:59.111 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:59.111 CXX test/cpp_headers/nvmf.o 00:04:59.368 CXX test/cpp_headers/nvmf_spec.o 00:04:59.368 CXX test/cpp_headers/opal.o 00:04:59.368 CXX test/cpp_headers/nvmf_transport.o 00:04:59.368 CXX test/cpp_headers/opal_spec.o 00:04:59.368 LINK dif 00:04:59.368 CXX test/cpp_headers/pci_ids.o 00:04:59.368 CXX test/cpp_headers/pipe.o 00:04:59.368 CXX test/cpp_headers/queue.o 00:04:59.368 CXX test/cpp_headers/reduce.o 00:04:59.368 CXX test/cpp_headers/rpc.o 00:04:59.368 CXX test/cpp_headers/scheduler.o 00:04:59.368 CXX test/cpp_headers/scsi.o 00:04:59.368 CXX test/cpp_headers/scsi_spec.o 00:04:59.368 CXX test/cpp_headers/sock.o 00:04:59.626 CXX test/cpp_headers/stdinc.o 00:04:59.626 CXX test/cpp_headers/string.o 00:04:59.626 CXX test/cpp_headers/thread.o 00:04:59.626 CXX test/cpp_headers/trace.o 00:04:59.626 CXX test/cpp_headers/trace_parser.o 00:04:59.626 CXX test/cpp_headers/tree.o 00:04:59.626 CXX test/cpp_headers/ublk.o 00:04:59.626 CXX test/cpp_headers/util.o 00:04:59.626 CXX test/cpp_headers/uuid.o 00:04:59.626 
LINK cuse 00:04:59.626 CC test/bdev/bdevio/bdevio.o 00:04:59.626 CXX test/cpp_headers/version.o 00:04:59.626 CXX test/cpp_headers/vfio_user_pci.o 00:04:59.884 CXX test/cpp_headers/vfio_user_spec.o 00:04:59.884 CXX test/cpp_headers/vhost.o 00:04:59.884 CXX test/cpp_headers/vmd.o 00:04:59.884 CXX test/cpp_headers/xor.o 00:04:59.884 CXX test/cpp_headers/zipf.o 00:05:00.143 LINK bdevio 00:05:03.430 LINK esnap 00:05:03.693 00:05:03.693 real 1m7.459s 00:05:03.693 user 5m23.001s 00:05:03.693 sys 0m57.531s 00:05:03.693 00:27:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:03.693 00:27:40 make -- common/autotest_common.sh@10 -- $ set +x 00:05:03.693 ************************************ 00:05:03.693 END TEST make 00:05:03.693 ************************************ 00:05:03.693 00:27:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:03.693 00:27:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:03.693 00:27:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:03.693 00:27:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.693 00:27:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:03.693 00:27:40 -- pm/common@44 -- $ pid=5805 00:05:03.693 00:27:40 -- pm/common@50 -- $ kill -TERM 5805 00:05:03.693 00:27:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.693 00:27:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:03.693 00:27:40 -- pm/common@44 -- $ pid=5806 00:05:03.693 00:27:40 -- pm/common@50 -- $ kill -TERM 5806 00:05:03.693 00:27:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:03.693 00:27:40 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:03.693 00:27:40 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.693 00:27:40 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.693 00:27:40 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.693 00:27:40 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.693 00:27:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.693 00:27:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.693 00:27:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.693 00:27:40 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.693 00:27:40 -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.693 00:27:40 -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.693 00:27:40 -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.693 00:27:40 -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.693 00:27:40 -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.693 00:27:40 -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.693 00:27:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.693 00:27:40 -- scripts/common.sh@344 -- # case "$op" in 00:05:03.693 00:27:40 -- scripts/common.sh@345 -- # : 1 00:05:03.693 00:27:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.693 00:27:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.693 00:27:40 -- scripts/common.sh@365 -- # decimal 1 00:05:03.693 00:27:40 -- scripts/common.sh@353 -- # local d=1 00:05:03.693 00:27:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.693 00:27:40 -- scripts/common.sh@355 -- # echo 1 00:05:03.693 00:27:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.693 00:27:40 -- scripts/common.sh@366 -- # decimal 2 00:05:03.693 00:27:40 -- scripts/common.sh@353 -- # local d=2 00:05:03.693 00:27:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.693 00:27:40 -- scripts/common.sh@355 -- # echo 2 00:05:03.693 00:27:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.693 00:27:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.693 00:27:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.693 00:27:40 -- scripts/common.sh@368 -- # return 0 00:05:03.693 00:27:40 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.693 00:27:40 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.693 --rc genhtml_branch_coverage=1 00:05:03.693 --rc genhtml_function_coverage=1 00:05:03.693 --rc genhtml_legend=1 00:05:03.693 --rc geninfo_all_blocks=1 00:05:03.693 --rc geninfo_unexecuted_blocks=1 00:05:03.693 00:05:03.693 ' 00:05:03.693 00:27:40 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.693 --rc genhtml_branch_coverage=1 00:05:03.693 --rc genhtml_function_coverage=1 00:05:03.693 --rc genhtml_legend=1 00:05:03.693 --rc geninfo_all_blocks=1 00:05:03.693 --rc geninfo_unexecuted_blocks=1 00:05:03.693 00:05:03.693 ' 00:05:03.693 00:27:40 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.693 --rc genhtml_branch_coverage=1 00:05:03.693 --rc genhtml_function_coverage=1 00:05:03.693 --rc genhtml_legend=1 00:05:03.693 --rc geninfo_all_blocks=1 00:05:03.693 --rc geninfo_unexecuted_blocks=1 00:05:03.693 00:05:03.693 ' 00:05:03.693 00:27:40 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.693 --rc genhtml_branch_coverage=1 00:05:03.693 --rc genhtml_function_coverage=1 00:05:03.693 --rc genhtml_legend=1 00:05:03.693 --rc geninfo_all_blocks=1 00:05:03.693 --rc geninfo_unexecuted_blocks=1 00:05:03.693 00:05:03.693 ' 00:05:03.693 00:27:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.693 00:27:40 -- nvmf/common.sh@7 -- # uname -s 00:05:03.954 00:27:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.954 00:27:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.954 00:27:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.954 00:27:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.954 00:27:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.954 00:27:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.954 00:27:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.954 00:27:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.954 00:27:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.954 00:27:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.954 00:27:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f838944b-503a-4293-87ba-5ffd451304f8 00:05:03.954 
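The xtrace above steps through scripts/common.sh deciding that lcov 1.15 is older than 2: both versions are split on '.', '-' and ':' into arrays and compared field by field until one side wins. A self-contained sketch of the same comparison; the function name is mine and non-numeric fields are not handled:

    version_lt() {                          # returns 0 when $1 < $2
        local -a v1 v2; local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                            # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"    # matches the trace: 1 < 2 settles it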
00:27:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=f838944b-503a-4293-87ba-5ffd451304f8 00:05:03.954 00:27:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.954 00:27:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.954 00:27:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.954 00:27:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.954 00:27:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.954 00:27:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.954 00:27:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.954 00:27:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.954 00:27:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.954 00:27:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.954 00:27:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.954 00:27:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.954 00:27:40 -- paths/export.sh@5 -- # export PATH 00:05:03.954 00:27:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.954 00:27:40 -- nvmf/common.sh@51 -- # : 0 00:05:03.954 00:27:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.954 00:27:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.954 00:27:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.954 00:27:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.954 00:27:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.954 00:27:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.954 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.954 00:27:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.954 00:27:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.954 00:27:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.954 00:27:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:03.954 00:27:40 -- spdk/autotest.sh@32 -- # uname -s 00:05:03.954 00:27:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:03.954 00:27:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:03.954 00:27:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:03.954 00:27:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:03.954 00:27:40 -- 
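The captured failure above, "[: : integer expression expected" from test/nvmf/common.sh line 33, is bash's [ builtin being handed an empty string where -eq requires an integer; the trace shows the literal test '[' '' -eq 1 ']'. A hedged sketch of the failure and the usual guard, with an illustrative variable name since the log does not show which flag was unset:

    flag=""
    [ "$flag" -eq 1 ]                              # reproduces: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo "feature off"   # defaulting the operand avoids it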
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:03.954 00:27:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:03.954 00:27:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:03.954 00:27:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:03.954 00:27:40 -- spdk/autotest.sh@48 -- # udevadm_pid=66672 00:05:03.954 00:27:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:03.954 00:27:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:03.954 00:27:40 -- pm/common@17 -- # local monitor 00:05:03.954 00:27:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.954 00:27:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.954 00:27:40 -- pm/common@25 -- # sleep 1 00:05:03.954 00:27:40 -- pm/common@21 -- # date +%s 00:05:03.954 00:27:40 -- pm/common@21 -- # date +%s 00:05:03.954 00:27:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732667260 00:05:03.954 00:27:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732667260 00:05:03.954 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732667260_collect-vmstat.pm.log 00:05:03.954 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732667260_collect-cpu-load.pm.log 00:05:04.896 00:27:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:04.896 00:27:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:04.896 00:27:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.896 00:27:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.896 00:27:41 -- spdk/autotest.sh@59 -- # create_test_list 00:05:04.896 00:27:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:04.896 00:27:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.896 00:27:41 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:04.896 00:27:41 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:04.896 00:27:41 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:04.896 00:27:41 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:04.896 00:27:41 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:04.896 00:27:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:04.896 00:27:41 -- common/autotest_common.sh@1457 -- # uname 00:05:04.896 00:27:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:04.896 00:27:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:04.896 00:27:41 -- common/autotest_common.sh@1477 -- # uname 00:05:04.896 00:27:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:04.896 00:27:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:04.896 00:27:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:05.157 lcov: LCOV version 1.15 00:05:05.157 00:27:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:20.075 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:20.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:38.184 00:28:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:38.184 00:28:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.184 00:28:12 -- common/autotest_common.sh@10 -- # set +x 00:05:38.184 00:28:12 -- spdk/autotest.sh@78 -- # rm -f 00:05:38.184 00:28:12 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.184 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:38.184 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:38.184 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:38.184 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:38.184 00:28:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:38.184 00:28:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:38.184 00:28:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:38.184 00:28:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:38.184 00:28:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:38.184 00:28:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:38.184 00:28:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:38.184 00:28:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:38.184 00:28:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:38.184 00:28:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:38.184 00:28:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:38.184 00:28:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:38.184 00:28:13 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:38.184 00:28:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:38.184 00:28:13 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:38.184 00:28:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:38.184 00:28:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:38.184 00:28:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:38.184 00:28:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:38.184 00:28:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:38.184 00:28:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.184 00:28:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.184 00:28:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:38.184 00:28:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:38.184 00:28:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:38.184 No valid GPT data, bailing 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # pt= 00:05:38.184 00:28:13 -- scripts/common.sh@395 -- # return 1 00:05:38.184 00:28:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:38.184 1+0 records in 00:05:38.184 1+0 records out 00:05:38.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268217 s, 39.1 MB/s 00:05:38.184 00:28:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.184 00:28:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.184 00:28:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:38.184 00:28:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:38.184 00:28:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:38.184 No valid GPT data, bailing 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # pt= 00:05:38.184 00:28:13 -- scripts/common.sh@395 -- # return 1 00:05:38.184 00:28:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:38.184 1+0 records in 00:05:38.184 1+0 records out 00:05:38.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551952 s, 190 MB/s 00:05:38.184 00:28:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.184 00:28:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.184 00:28:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:38.184 00:28:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:38.184 00:28:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:38.184 No valid GPT data, bailing 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # pt= 00:05:38.184 00:28:13 -- scripts/common.sh@395 -- # return 1 00:05:38.184 00:28:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:38.184 1+0 
records in 00:05:38.184 1+0 records out 00:05:38.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616092 s, 170 MB/s 00:05:38.184 00:28:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.184 00:28:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.184 00:28:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:38.184 00:28:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:38.184 00:28:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:38.184 No valid GPT data, bailing 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:38.184 00:28:13 -- scripts/common.sh@394 -- # pt= 00:05:38.184 00:28:13 -- scripts/common.sh@395 -- # return 1 00:05:38.184 00:28:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:38.184 1+0 records in 00:05:38.184 1+0 records out 00:05:38.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564019 s, 186 MB/s 00:05:38.185 00:28:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.185 00:28:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.185 00:28:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:38.185 00:28:13 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:38.185 00:28:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:38.185 No valid GPT data, bailing 00:05:38.185 00:28:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:38.185 00:28:13 -- scripts/common.sh@394 -- # pt= 00:05:38.185 00:28:13 -- scripts/common.sh@395 -- # return 1 00:05:38.185 00:28:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:38.185 1+0 records in 00:05:38.185 1+0 records out 00:05:38.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582883 s, 180 MB/s 00:05:38.185 00:28:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.185 00:28:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.185 00:28:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:38.185 00:28:13 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:38.185 00:28:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:38.185 No valid GPT data, bailing 00:05:38.185 00:28:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:38.185 00:28:13 -- scripts/common.sh@394 -- # pt= 00:05:38.185 00:28:14 -- scripts/common.sh@395 -- # return 1 00:05:38.185 00:28:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:38.185 1+0 records in 00:05:38.185 1+0 records out 00:05:38.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00726271 s, 144 MB/s 00:05:38.185 00:28:14 -- spdk/autotest.sh@105 -- # sync 00:05:38.185 00:28:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:38.185 00:28:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:38.185 00:28:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:39.139 00:28:15 -- spdk/autotest.sh@111 -- # uname -s 00:05:39.139 00:28:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:39.140 00:28:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:39.140 00:28:15 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:39.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.285 
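Every idle namespace above gets the same treatment: probe it for a partition table and, only when nothing is found ("No valid GPT data, bailing"), scrub the first MiB with zeroes. A compressed sketch of that loop; the real flow also consults scripts/spdk-gpt.py before falling back to blkid:

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do            # namespaces only, skip partitions
        pt=$(blkid -s PTTYPE -o value "$dev")
        if [ -z "$pt" ]; then                   # no partition table detected
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done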
Hugepages 00:05:40.285 node hugesize free / total 00:05:40.285 node0 1048576kB 0 / 0 00:05:40.285 node0 2048kB 0 / 0 00:05:40.285 00:05:40.285 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:40.285 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:40.285 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:40.546 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:40.546 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:40.546 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:40.546 00:28:17 -- spdk/autotest.sh@117 -- # uname -s 00:05:40.546 00:28:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:40.546 00:28:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:40.546 00:28:17 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.692 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.692 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.692 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.692 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.692 00:28:18 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:43.076 00:28:19 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:43.076 00:28:19 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:43.076 00:28:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:43.076 00:28:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:43.076 00:28:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:43.076 00:28:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:43.076 00:28:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.076 00:28:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.076 00:28:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:43.076 00:28:19 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:43.076 00:28:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:43.076 00:28:19 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.336 Waiting for block devices as requested 00:05:43.336 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:43.336 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:43.597 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:43.597 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:48.892 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:48.892 00:28:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:48.892 00:28:25 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:48.892 00:28:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:48.892 00:28:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1543 -- # continue 00:05:48.892 00:28:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:48.892 00:28:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1543 -- # continue 00:05:48.892 00:28:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:48.892 00:28:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1543 -- # continue 00:05:48.892 00:28:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:48.892 00:28:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:48.892 00:28:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:48.892 00:28:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:48.892 00:28:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:48.892 00:28:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
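The readlink / grep / basename sequence repeated above maps a PCI address to its NVMe character device by resolving the /sys/class/nvme links until one points back under that BDF. The same walk as a small standalone helper (the function name is mine):

    ctrlr_from_bdf() {
        local bdf=$1 link
        for link in /sys/class/nvme/nvme*; do
            if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
                echo "/dev/$(basename "$link")"   # e.g. 0000:00:10.0 -> /dev/nvme1
                return 0
            fi
        done
        return 1
    }
    ctrlr_from_bdf 0000:00:10.0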
00:05:48.892 00:28:25 -- common/autotest_common.sh@1543 -- # continue 00:05:48.892 00:28:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:48.892 00:28:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.892 00:28:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 00:28:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:48.892 00:28:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.892 00:28:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.892 00:28:25 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.733 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:49.733 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.005 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.005 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.005 00:28:26 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:50.005 00:28:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:50.006 00:28:26 -- common/autotest_common.sh@10 -- # set +x 00:05:50.006 00:28:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:50.006 00:28:26 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:50.006 00:28:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:50.006 00:28:26 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:50.006 00:28:26 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:50.006 00:28:26 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:50.006 00:28:26 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:50.006 00:28:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:50.006 00:28:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:50.006 00:28:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:50.006 00:28:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:50.006 00:28:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:50.006 00:28:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:50.006 00:28:26 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:50.006 00:28:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:50.006 00:28:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.006 00:28:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.006 00:28:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.006 00:28:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.006 00:28:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.006 00:28:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
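Each id-ctrl probe above reads oacs as 0x12a, whose bit 3 (value 8) advertises namespace management, and then checks that unvmcap is 0 before moving on. A sketch of the same probe, assuming nvme-cli's human-readable id-ctrl output:

    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)     # ' 0x12a' above
    if (( (oacs & 0x8) != 0 )); then                              # bit 3: ns management
        unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "no unallocated capacity to reclaim"
    fi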
00:05:50.006 00:28:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:50.006 00:28:26 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:50.006 00:28:26 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.006 00:28:26 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:50.006 00:28:26 -- common/autotest_common.sh@1572 -- # return 0 00:05:50.006 00:28:26 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:50.006 00:28:26 -- common/autotest_common.sh@1580 -- # return 0 00:05:50.006 00:28:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:50.006 00:28:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:50.006 00:28:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.006 00:28:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.006 00:28:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:50.268 00:28:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.268 00:28:26 -- common/autotest_common.sh@10 -- # set +x 00:05:50.268 00:28:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:50.268 00:28:26 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:50.268 00:28:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.268 00:28:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.268 00:28:26 -- common/autotest_common.sh@10 -- # set +x 00:05:50.268 ************************************ 00:05:50.268 START TEST env 00:05:50.268 ************************************ 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:50.268 * Looking for test storage... 00:05:50.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.268 00:28:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.268 00:28:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.268 00:28:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.268 00:28:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.268 00:28:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.268 00:28:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.268 00:28:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.268 00:28:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.268 00:28:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.268 00:28:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.268 00:28:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.268 00:28:26 env -- scripts/common.sh@344 -- # case "$op" in 00:05:50.268 00:28:26 env -- scripts/common.sh@345 -- # : 1 00:05:50.268 00:28:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.268 00:28:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.268 00:28:26 env -- scripts/common.sh@365 -- # decimal 1 00:05:50.268 00:28:26 env -- scripts/common.sh@353 -- # local d=1 00:05:50.268 00:28:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.268 00:28:26 env -- scripts/common.sh@355 -- # echo 1 00:05:50.268 00:28:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.268 00:28:26 env -- scripts/common.sh@366 -- # decimal 2 00:05:50.268 00:28:26 env -- scripts/common.sh@353 -- # local d=2 00:05:50.268 00:28:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.268 00:28:26 env -- scripts/common.sh@355 -- # echo 2 00:05:50.268 00:28:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.268 00:28:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.268 00:28:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.268 00:28:26 env -- scripts/common.sh@368 -- # return 0 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.268 --rc genhtml_branch_coverage=1 00:05:50.268 --rc genhtml_function_coverage=1 00:05:50.268 --rc genhtml_legend=1 00:05:50.268 --rc geninfo_all_blocks=1 00:05:50.268 --rc geninfo_unexecuted_blocks=1 00:05:50.268 00:05:50.268 ' 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.268 --rc genhtml_branch_coverage=1 00:05:50.268 --rc genhtml_function_coverage=1 00:05:50.268 --rc genhtml_legend=1 00:05:50.268 --rc geninfo_all_blocks=1 00:05:50.268 --rc geninfo_unexecuted_blocks=1 00:05:50.268 00:05:50.268 ' 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.268 --rc genhtml_branch_coverage=1 00:05:50.268 --rc genhtml_function_coverage=1 00:05:50.268 --rc genhtml_legend=1 00:05:50.268 --rc geninfo_all_blocks=1 00:05:50.268 --rc geninfo_unexecuted_blocks=1 00:05:50.268 00:05:50.268 ' 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.268 --rc genhtml_branch_coverage=1 00:05:50.268 --rc genhtml_function_coverage=1 00:05:50.268 --rc genhtml_legend=1 00:05:50.268 --rc geninfo_all_blocks=1 00:05:50.268 --rc geninfo_unexecuted_blocks=1 00:05:50.268 00:05:50.268 ' 00:05:50.268 00:28:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.268 00:28:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.268 00:28:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.268 ************************************ 00:05:50.268 START TEST env_memory 00:05:50.268 ************************************ 00:05:50.268 00:28:27 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:50.268 00:05:50.268 00:05:50.268 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.268 http://cunit.sourceforge.net/ 00:05:50.268 00:05:50.268 00:05:50.268 Suite: memory 00:05:50.531 Test: alloc and free memory map ...[2024-11-27 00:28:27.059153] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:50.531 passed 00:05:50.531 Test: mem map translation ...[2024-11-27 00:28:27.098282] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:50.531 [2024-11-27 00:28:27.098341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:50.531 [2024-11-27 00:28:27.098403] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:50.531 [2024-11-27 00:28:27.098418] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:50.531 passed 00:05:50.531 Test: mem map registration ...[2024-11-27 00:28:27.166727] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:50.531 [2024-11-27 00:28:27.166799] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:50.531 passed 00:05:50.531 Test: mem map adjacent registrations ...passed 00:05:50.531 00:05:50.531 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.531 suites 1 1 n/a 0 0 00:05:50.531 tests 4 4 4 0 0 00:05:50.531 asserts 152 152 152 0 n/a 00:05:50.531 00:05:50.531 Elapsed time = 0.234 seconds 00:05:50.531 00:05:50.531 real 0m0.272s 00:05:50.531 user 0m0.245s 00:05:50.531 sys 0m0.019s 00:05:50.531 00:28:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.531 ************************************ 00:05:50.531 END TEST env_memory 00:05:50.531 ************************************ 00:05:50.531 00:28:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:50.793 00:28:27 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.793 00:28:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.793 00:28:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.793 00:28:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.793 ************************************ 00:05:50.793 START TEST env_vtophys 00:05:50.793 ************************************ 00:05:50.793 00:28:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:50.793 EAL: lib.eal log level changed from notice to debug 00:05:50.793 EAL: Detected lcore 0 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 1 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 2 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 3 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 4 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 5 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 6 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 7 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 8 as core 0 on socket 0 00:05:50.793 EAL: Detected lcore 9 as core 0 on socket 0 00:05:50.793 EAL: Maximum logical cores by configuration: 128 00:05:50.793 EAL: Detected CPU lcores: 10 00:05:50.793 EAL: Detected NUMA nodes: 1 00:05:50.793 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:50.793 EAL: Detected shared linkage of DPDK 00:05:50.793 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:50.793 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:50.793 EAL: Registered [vdev] bus. 00:05:50.793 EAL: bus.vdev log level changed from disabled to notice 00:05:50.793 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:50.793 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:50.793 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:50.793 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:50.793 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:50.793 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:50.793 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:50.793 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:50.793 EAL: No shared files mode enabled, IPC will be disabled 00:05:50.793 EAL: No shared files mode enabled, IPC is disabled 00:05:50.793 EAL: Selected IOVA mode 'PA' 00:05:50.793 EAL: Probing VFIO support... 00:05:50.793 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:50.793 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:50.793 EAL: Ask a virtual area of 0x2e000 bytes 00:05:50.793 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:50.793 EAL: Setting up physically contiguous memory... 00:05:50.793 EAL: Setting maximum number of open files to 524288 00:05:50.793 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:50.793 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:50.793 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.793 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:50.793 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.793 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.793 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:50.793 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:50.793 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.793 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:50.793 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.793 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.793 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:50.793 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:50.793 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.793 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:50.793 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.793 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.793 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:50.793 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:50.793 EAL: Ask a virtual area of 0x61000 bytes 00:05:50.793 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:50.793 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:50.793 EAL: Ask a virtual area of 0x400000000 bytes 00:05:50.793 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:50.793 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:50.793 EAL: Hugepages will be freed exactly as allocated. 00:05:50.793 EAL: No shared files mode enabled, IPC is disabled 00:05:50.793 EAL: No shared files mode enabled, IPC is disabled 00:05:50.793 EAL: TSC frequency is ~2600000 KHz 00:05:50.793 EAL: Main lcore 0 is ready (tid=7f95644eaa40;cpuset=[0]) 00:05:50.793 EAL: Trying to obtain current memory policy. 00:05:50.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.793 EAL: Restoring previous memory policy: 0 00:05:50.793 EAL: request: mp_malloc_sync 00:05:50.793 EAL: No shared files mode enabled, IPC is disabled 00:05:50.793 EAL: Heap on socket 0 was expanded by 2MB 00:05:50.793 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:50.793 EAL: No shared files mode enabled, IPC is disabled 00:05:50.793 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:50.793 EAL: Mem event callback 'spdk:(nil)' registered 00:05:50.793 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:50.793 00:05:50.793 00:05:50.793 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.793 http://cunit.sourceforge.net/ 00:05:50.793 00:05:50.793 00:05:50.793 Suite: components_suite 00:05:51.366 Test: vtophys_malloc_test ...passed 00:05:51.366 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:51.366 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.366 EAL: Restoring previous memory policy: 4 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was expanded by 4MB 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was shrunk by 4MB 00:05:51.367 EAL: Trying to obtain current memory policy. 00:05:51.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.367 EAL: Restoring previous memory policy: 4 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was expanded by 6MB 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was shrunk by 6MB 00:05:51.367 EAL: Trying to obtain current memory policy. 00:05:51.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.367 EAL: Restoring previous memory policy: 4 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was expanded by 10MB 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was shrunk by 10MB 00:05:51.367 EAL: Trying to obtain current memory policy. 
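The vtophys_spdk_malloc_test sequence starting here (and continuing below) allocates and frees progressively larger buffers; the heap grows by 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. 2^k + 2 MB per step. The extra 2 MB is likely one additional hugepage of allocator overhead, an inference from the log rather than a statement about the test's internals; the matching "shrunk by" lines confirm each buffer is freed before the next, larger one is tried. A quick way to reproduce the expected size sequence:

    # print the heap-expansion sizes seen in the EAL log: 2^k + 2 MB for k = 1..10
    for k in $(seq 1 10); do
        echo "$((2 ** k + 2))MB"
    done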
00:05:51.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.367 EAL: Restoring previous memory policy: 4 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was expanded by 18MB 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was shrunk by 18MB 00:05:51.367 EAL: Trying to obtain current memory policy. 00:05:51.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.367 EAL: Restoring previous memory policy: 4 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was expanded by 34MB 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was shrunk by 34MB 00:05:51.367 EAL: Trying to obtain current memory policy. 00:05:51.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.367 EAL: Restoring previous memory policy: 4 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was expanded by 66MB 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was shrunk by 66MB 00:05:51.367 EAL: Trying to obtain current memory policy. 00:05:51.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.367 EAL: Restoring previous memory policy: 4 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was expanded by 130MB 00:05:51.367 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.367 EAL: request: mp_malloc_sync 00:05:51.367 EAL: No shared files mode enabled, IPC is disabled 00:05:51.367 EAL: Heap on socket 0 was shrunk by 130MB 00:05:51.367 EAL: Trying to obtain current memory policy. 00:05:51.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.628 EAL: Restoring previous memory policy: 4 00:05:51.628 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.628 EAL: request: mp_malloc_sync 00:05:51.628 EAL: No shared files mode enabled, IPC is disabled 00:05:51.628 EAL: Heap on socket 0 was expanded by 258MB 00:05:51.628 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.628 EAL: request: mp_malloc_sync 00:05:51.628 EAL: No shared files mode enabled, IPC is disabled 00:05:51.628 EAL: Heap on socket 0 was shrunk by 258MB 00:05:51.628 EAL: Trying to obtain current memory policy. 
00:05:51.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.628 EAL: Restoring previous memory policy: 4 00:05:51.628 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.628 EAL: request: mp_malloc_sync 00:05:51.628 EAL: No shared files mode enabled, IPC is disabled 00:05:51.628 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.628 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.889 EAL: request: mp_malloc_sync 00:05:51.889 EAL: No shared files mode enabled, IPC is disabled 00:05:51.889 EAL: Heap on socket 0 was shrunk by 514MB 00:05:51.889 EAL: Trying to obtain current memory policy. 00:05:51.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.889 EAL: Restoring previous memory policy: 4 00:05:51.889 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.889 EAL: request: mp_malloc_sync 00:05:51.889 EAL: No shared files mode enabled, IPC is disabled 00:05:51.889 EAL: Heap on socket 0 was expanded by 1026MB 00:05:52.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.150 EAL: request: mp_malloc_sync 00:05:52.150 passed 00:05:52.150 00:05:52.150 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.150 suites 1 1 n/a 0 0 00:05:52.150 tests 2 2 2 0 0 00:05:52.150 asserts 5358 5358 5358 0 n/a 00:05:52.150 00:05:52.150 Elapsed time = 1.339 seconds 00:05:52.150 EAL: No shared files mode enabled, IPC is disabled 00:05:52.150 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:52.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.150 EAL: request: mp_malloc_sync 00:05:52.150 EAL: No shared files mode enabled, IPC is disabled 00:05:52.150 EAL: Heap on socket 0 was shrunk by 2MB 00:05:52.150 EAL: No shared files mode enabled, IPC is disabled 00:05:52.150 EAL: No shared files mode enabled, IPC is disabled 00:05:52.150 EAL: No shared files mode enabled, IPC is disabled 00:05:52.151 00:05:52.151 real 0m1.593s 00:05:52.151 user 0m0.624s 00:05:52.151 sys 0m0.830s 00:05:52.151 ************************************ 00:05:52.151 END TEST env_vtophys 00:05:52.151 ************************************ 00:05:52.151 00:28:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.151 00:28:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:52.412 00:28:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.412 00:28:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.412 00:28:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.412 00:28:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.412 ************************************ 00:05:52.412 START TEST env_pci 00:05:52.412 ************************************ 00:05:52.412 00:28:28 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:52.412 00:05:52.412 00:05:52.412 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.412 http://cunit.sourceforge.net/ 00:05:52.412 00:05:52.412 00:05:52.412 Suite: pci 00:05:52.412 Test: pci_hook ...[2024-11-27 00:28:29.012293] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69428 has claimed it 00:05:52.412 passed 00:05:52.412 00:05:52.412 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.412 suites 1 1 n/a 0 0 00:05:52.412 tests 1 1 1 0 0 00:05:52.412 asserts 25 25 25 0 n/a 00:05:52.412 00:05:52.412 Elapsed time = 0.004 seconds 00:05:52.412 EAL: Cannot find 
device (10000:00:01.0) 00:05:52.412 EAL: Failed to attach device on primary process 00:05:52.412 00:05:52.412 real 0m0.055s 00:05:52.412 user 0m0.028s 00:05:52.412 sys 0m0.026s 00:05:52.412 ************************************ 00:05:52.412 END TEST env_pci 00:05:52.412 ************************************ 00:05:52.413 00:28:29 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.413 00:28:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:52.413 00:28:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:52.413 00:28:29 env -- env/env.sh@15 -- # uname 00:05:52.413 00:28:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:52.413 00:28:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:52.413 00:28:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.413 00:28:29 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:52.413 00:28:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.413 00:28:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.413 ************************************ 00:05:52.413 START TEST env_dpdk_post_init 00:05:52.413 ************************************ 00:05:52.413 00:28:29 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:52.413 EAL: Detected CPU lcores: 10 00:05:52.413 EAL: Detected NUMA nodes: 1 00:05:52.413 EAL: Detected shared linkage of DPDK 00:05:52.413 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.413 EAL: Selected IOVA mode 'PA' 00:05:52.674 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:52.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:52.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:52.674 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:52.674 Starting DPDK initialization... 00:05:52.674 Starting SPDK post initialization... 00:05:52.674 SPDK NVMe probe 00:05:52.674 Attaching to 0000:00:10.0 00:05:52.674 Attaching to 0000:00:11.0 00:05:52.674 Attaching to 0000:00:12.0 00:05:52.674 Attaching to 0000:00:13.0 00:05:52.674 Attached to 0000:00:13.0 00:05:52.674 Attached to 0000:00:10.0 00:05:52.674 Attached to 0000:00:11.0 00:05:52.674 Attached to 0000:00:12.0 00:05:52.674 Cleaning up... 
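env_dpdk_post_init above probes the four controllers in BDF order but reports "Attached to 0000:00:13.0" first: attach completions arrive asynchronously, so completion order need not match probe order. The bindings themselves were set up earlier by setup.sh; one way to double-check which kernel driver each controller ended up on, a sketch assuming the sysfs layout used throughout this run:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        # the driver symlink points at the bound driver (uio_pci_generic here)
        printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
    done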
00:05:52.674 00:05:52.674 real 0m0.249s 00:05:52.674 user 0m0.074s 00:05:52.674 sys 0m0.077s 00:05:52.674 ************************************ 00:05:52.674 END TEST env_dpdk_post_init 00:05:52.674 ************************************ 00:05:52.674 00:28:29 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.674 00:28:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.674 00:28:29 env -- env/env.sh@26 -- # uname 00:05:52.674 00:28:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:52.674 00:28:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.674 00:28:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.674 00:28:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.674 00:28:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.674 ************************************ 00:05:52.674 START TEST env_mem_callbacks 00:05:52.674 ************************************ 00:05:52.674 00:28:29 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:52.935 EAL: Detected CPU lcores: 10 00:05:52.935 EAL: Detected NUMA nodes: 1 00:05:52.935 EAL: Detected shared linkage of DPDK 00:05:52.935 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:52.935 EAL: Selected IOVA mode 'PA' 00:05:52.935 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:52.935 00:05:52.935 00:05:52.935 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.935 http://cunit.sourceforge.net/ 00:05:52.935 00:05:52.935 00:05:52.935 Suite: memory 00:05:52.935 Test: test ... 00:05:52.935 register 0x200000200000 2097152 00:05:52.935 malloc 3145728 00:05:52.935 register 0x200000400000 4194304 00:05:52.935 buf 0x200000500000 len 3145728 PASSED 00:05:52.935 malloc 64 00:05:52.935 buf 0x2000004fff40 len 64 PASSED 00:05:52.935 malloc 4194304 00:05:52.935 register 0x200000800000 6291456 00:05:52.935 buf 0x200000a00000 len 4194304 PASSED 00:05:52.935 free 0x200000500000 3145728 00:05:52.935 free 0x2000004fff40 64 00:05:52.935 unregister 0x200000400000 4194304 PASSED 00:05:52.935 free 0x200000a00000 4194304 00:05:52.935 unregister 0x200000800000 6291456 PASSED 00:05:52.935 malloc 8388608 00:05:52.935 register 0x200000400000 10485760 00:05:52.935 buf 0x200000600000 len 8388608 PASSED 00:05:52.935 free 0x200000600000 8388608 00:05:52.935 unregister 0x200000400000 10485760 PASSED 00:05:52.935 passed 00:05:52.935 00:05:52.935 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.935 suites 1 1 n/a 0 0 00:05:52.935 tests 1 1 1 0 0 00:05:52.935 asserts 15 15 15 0 n/a 00:05:52.935 00:05:52.935 Elapsed time = 0.013 seconds 00:05:52.935 00:05:52.935 real 0m0.179s 00:05:52.935 user 0m0.022s 00:05:52.935 sys 0m0.055s 00:05:52.935 00:28:29 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.935 ************************************ 00:05:52.935 END TEST env_mem_callbacks 00:05:52.935 ************************************ 00:05:52.935 00:28:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:52.935 00:05:52.935 real 0m2.855s 00:05:52.935 user 0m1.165s 00:05:52.935 sys 0m1.245s 00:05:52.935 00:28:29 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.935 00:28:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.935 ************************************ 00:05:52.935 END TEST env 00:05:52.935 
************************************ 00:05:53.196 00:28:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.196 00:28:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.196 00:28:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.196 00:28:29 -- common/autotest_common.sh@10 -- # set +x 00:05:53.196 ************************************ 00:05:53.196 START TEST rpc 00:05:53.196 ************************************ 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.196 * Looking for test storage... 00:05:53.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.196 00:28:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.196 00:28:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.196 00:28:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.196 00:28:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.196 00:28:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.196 00:28:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.196 00:28:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.196 00:28:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.196 00:28:29 rpc -- scripts/common.sh@345 -- # : 1 00:05:53.196 00:28:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.196 00:28:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.196 00:28:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.196 00:28:29 rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.196 00:28:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.196 00:28:29 rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.196 00:28:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.196 00:28:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.196 00:28:29 rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.196 00:28:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.196 00:28:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.196 00:28:29 rpc -- scripts/common.sh@368 -- # return 0 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.196 --rc genhtml_branch_coverage=1 00:05:53.196 --rc genhtml_function_coverage=1 00:05:53.196 --rc genhtml_legend=1 00:05:53.196 --rc geninfo_all_blocks=1 00:05:53.196 --rc geninfo_unexecuted_blocks=1 00:05:53.196 00:05:53.196 ' 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.196 --rc genhtml_branch_coverage=1 00:05:53.196 --rc genhtml_function_coverage=1 00:05:53.196 --rc genhtml_legend=1 00:05:53.196 --rc geninfo_all_blocks=1 00:05:53.196 --rc geninfo_unexecuted_blocks=1 00:05:53.196 00:05:53.196 ' 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.196 --rc genhtml_branch_coverage=1 00:05:53.196 --rc genhtml_function_coverage=1 00:05:53.196 --rc genhtml_legend=1 00:05:53.196 --rc geninfo_all_blocks=1 00:05:53.196 --rc geninfo_unexecuted_blocks=1 00:05:53.196 00:05:53.196 ' 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.196 --rc genhtml_branch_coverage=1 00:05:53.196 --rc genhtml_function_coverage=1 00:05:53.196 --rc genhtml_legend=1 00:05:53.196 --rc geninfo_all_blocks=1 00:05:53.196 --rc geninfo_unexecuted_blocks=1 00:05:53.196 00:05:53.196 ' 00:05:53.196 00:28:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69555 00:05:53.196 00:28:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.196 00:28:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69555 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 69555 ']' 00:05:53.196 00:28:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.197 00:28:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.197 00:28:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
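waitforlisten, entered above, blocks until the freshly launched spdk_tgt (started just below) answers on the RPC socket. A minimal equivalent, assuming scripts/rpc.py and the default /var/tmp/spdk.sock path used in this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # poll the RPC socket until the target answers; rpc_get_methods is a cheap query
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_pid" || { echo "spdk_tgt died before listening" >&2; exit 1; }
        sleep 0.2
    done
    echo "spdk_tgt (pid $spdk_pid) is listening"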
00:05:53.197 00:28:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:53.197 00:28:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.197 00:28:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.458 [2024-11-27 00:28:29.997948] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:05:53.458 [2024-11-27 00:28:29.998127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69555 ] 00:05:53.458 [2024-11-27 00:28:30.169368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.458 [2024-11-27 00:28:30.210408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:53.458 [2024-11-27 00:28:30.210478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69555' to capture a snapshot of events at runtime. 00:05:53.458 [2024-11-27 00:28:30.210493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.458 [2024-11-27 00:28:30.210504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.458 [2024-11-27 00:28:30.210516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69555 for offline analysis/debug. 00:05:53.458 [2024-11-27 00:28:30.211009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.401 00:28:30 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.401 00:28:30 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.401 00:28:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.401 00:28:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.401 00:28:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.401 00:28:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.401 00:28:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.401 00:28:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.401 00:28:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.401 ************************************ 00:05:54.401 START TEST rpc_integrity 00:05:54.401 ************************************ 00:05:54.401 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:54.401 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.401 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.401 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.401 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.401 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.401 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.401 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.401 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
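rpc_integrity, which starts below, drives a create/inspect/delete cycle over the RPC server launched above; the bdev dump that follows confirms the geometry, since 16384 blocks of 512 bytes is exactly the 8 MiB requested by "bdev_malloc_create 8 512". The same cycle can be driven by hand with scripts/rpc.py (the socket path defaults to /var/tmp/spdk.sock, matching this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    name=$($rpc bdev_malloc_create 8 512)        # 8 MiB malloc bdev, 512 B blocks; prints its name
    $rpc bdev_get_bdevs | jq length              # expect 1
    $rpc bdev_passthru_create -b "$name" -p Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$name"
    $rpc bdev_get_bdevs | jq length              # back to 0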
00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.402 { 00:05:54.402 "name": "Malloc0", 00:05:54.402 "aliases": [ 00:05:54.402 "eb9e62cd-5956-431a-9bfb-7ef73fe7435a" 00:05:54.402 ], 00:05:54.402 "product_name": "Malloc disk", 00:05:54.402 "block_size": 512, 00:05:54.402 "num_blocks": 16384, 00:05:54.402 "uuid": "eb9e62cd-5956-431a-9bfb-7ef73fe7435a", 00:05:54.402 "assigned_rate_limits": { 00:05:54.402 "rw_ios_per_sec": 0, 00:05:54.402 "rw_mbytes_per_sec": 0, 00:05:54.402 "r_mbytes_per_sec": 0, 00:05:54.402 "w_mbytes_per_sec": 0 00:05:54.402 }, 00:05:54.402 "claimed": false, 00:05:54.402 "zoned": false, 00:05:54.402 "supported_io_types": { 00:05:54.402 "read": true, 00:05:54.402 "write": true, 00:05:54.402 "unmap": true, 00:05:54.402 "flush": true, 00:05:54.402 "reset": true, 00:05:54.402 "nvme_admin": false, 00:05:54.402 "nvme_io": false, 00:05:54.402 "nvme_io_md": false, 00:05:54.402 "write_zeroes": true, 00:05:54.402 "zcopy": true, 00:05:54.402 "get_zone_info": false, 00:05:54.402 "zone_management": false, 00:05:54.402 "zone_append": false, 00:05:54.402 "compare": false, 00:05:54.402 "compare_and_write": false, 00:05:54.402 "abort": true, 00:05:54.402 "seek_hole": false, 00:05:54.402 "seek_data": false, 00:05:54.402 "copy": true, 00:05:54.402 "nvme_iov_md": false 00:05:54.402 }, 00:05:54.402 "memory_domains": [ 00:05:54.402 { 00:05:54.402 "dma_device_id": "system", 00:05:54.402 "dma_device_type": 1 00:05:54.402 }, 00:05:54.402 { 00:05:54.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.402 "dma_device_type": 2 00:05:54.402 } 00:05:54.402 ], 00:05:54.402 "driver_specific": {} 00:05:54.402 } 00:05:54.402 ]' 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 [2024-11-27 00:28:30.940163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:54.402 [2024-11-27 00:28:30.940243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.402 [2024-11-27 00:28:30.940281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:54.402 [2024-11-27 00:28:30.940299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.402 [2024-11-27 00:28:30.943224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.402 [2024-11-27 00:28:30.943279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.402 
Passthru0 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.402 { 00:05:54.402 "name": "Malloc0", 00:05:54.402 "aliases": [ 00:05:54.402 "eb9e62cd-5956-431a-9bfb-7ef73fe7435a" 00:05:54.402 ], 00:05:54.402 "product_name": "Malloc disk", 00:05:54.402 "block_size": 512, 00:05:54.402 "num_blocks": 16384, 00:05:54.402 "uuid": "eb9e62cd-5956-431a-9bfb-7ef73fe7435a", 00:05:54.402 "assigned_rate_limits": { 00:05:54.402 "rw_ios_per_sec": 0, 00:05:54.402 "rw_mbytes_per_sec": 0, 00:05:54.402 "r_mbytes_per_sec": 0, 00:05:54.402 "w_mbytes_per_sec": 0 00:05:54.402 }, 00:05:54.402 "claimed": true, 00:05:54.402 "claim_type": "exclusive_write", 00:05:54.402 "zoned": false, 00:05:54.402 "supported_io_types": { 00:05:54.402 "read": true, 00:05:54.402 "write": true, 00:05:54.402 "unmap": true, 00:05:54.402 "flush": true, 00:05:54.402 "reset": true, 00:05:54.402 "nvme_admin": false, 00:05:54.402 "nvme_io": false, 00:05:54.402 "nvme_io_md": false, 00:05:54.402 "write_zeroes": true, 00:05:54.402 "zcopy": true, 00:05:54.402 "get_zone_info": false, 00:05:54.402 "zone_management": false, 00:05:54.402 "zone_append": false, 00:05:54.402 "compare": false, 00:05:54.402 "compare_and_write": false, 00:05:54.402 "abort": true, 00:05:54.402 "seek_hole": false, 00:05:54.402 "seek_data": false, 00:05:54.402 "copy": true, 00:05:54.402 "nvme_iov_md": false 00:05:54.402 }, 00:05:54.402 "memory_domains": [ 00:05:54.402 { 00:05:54.402 "dma_device_id": "system", 00:05:54.402 "dma_device_type": 1 00:05:54.402 }, 00:05:54.402 { 00:05:54.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.402 "dma_device_type": 2 00:05:54.402 } 00:05:54.402 ], 00:05:54.402 "driver_specific": {} 00:05:54.402 }, 00:05:54.402 { 00:05:54.402 "name": "Passthru0", 00:05:54.402 "aliases": [ 00:05:54.402 "8b808a33-128a-50b1-9b42-547724e72e65" 00:05:54.402 ], 00:05:54.402 "product_name": "passthru", 00:05:54.402 "block_size": 512, 00:05:54.402 "num_blocks": 16384, 00:05:54.402 "uuid": "8b808a33-128a-50b1-9b42-547724e72e65", 00:05:54.402 "assigned_rate_limits": { 00:05:54.402 "rw_ios_per_sec": 0, 00:05:54.402 "rw_mbytes_per_sec": 0, 00:05:54.402 "r_mbytes_per_sec": 0, 00:05:54.402 "w_mbytes_per_sec": 0 00:05:54.402 }, 00:05:54.402 "claimed": false, 00:05:54.402 "zoned": false, 00:05:54.402 "supported_io_types": { 00:05:54.402 "read": true, 00:05:54.402 "write": true, 00:05:54.402 "unmap": true, 00:05:54.402 "flush": true, 00:05:54.402 "reset": true, 00:05:54.402 "nvme_admin": false, 00:05:54.402 "nvme_io": false, 00:05:54.402 "nvme_io_md": false, 00:05:54.402 "write_zeroes": true, 00:05:54.402 "zcopy": true, 00:05:54.402 "get_zone_info": false, 00:05:54.402 "zone_management": false, 00:05:54.402 "zone_append": false, 00:05:54.402 "compare": false, 00:05:54.402 "compare_and_write": false, 00:05:54.402 "abort": true, 00:05:54.402 "seek_hole": false, 00:05:54.402 "seek_data": false, 00:05:54.402 "copy": true, 00:05:54.402 "nvme_iov_md": false 00:05:54.402 }, 00:05:54.402 "memory_domains": [ 00:05:54.402 { 00:05:54.402 "dma_device_id": "system", 00:05:54.402 "dma_device_type": 1 00:05:54.402 }, 
00:05:54.402 { 00:05:54.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.402 "dma_device_type": 2 00:05:54.402 } 00:05:54.402 ], 00:05:54.402 "driver_specific": { 00:05:54.402 "passthru": { 00:05:54.402 "name": "Passthru0", 00:05:54.402 "base_bdev_name": "Malloc0" 00:05:54.402 } 00:05:54.402 } 00:05:54.402 } 00:05:54.402 ]' 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.402 00:28:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 00:28:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.402 00:28:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.402 00:28:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.402 00:28:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.402 00:28:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.402 00:28:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.402 00:05:54.402 real 0m0.222s 00:05:54.402 user 0m0.114s 00:05:54.402 sys 0m0.040s 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.402 00:28:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 ************************************ 00:05:54.402 END TEST rpc_integrity 00:05:54.402 ************************************ 00:05:54.402 00:28:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.402 00:28:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.402 00:28:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.402 00:28:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.402 ************************************ 00:05:54.402 START TEST rpc_plugins 00:05:54.402 ************************************ 00:05:54.402 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:54.402 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.402 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.403 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.403 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.403 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.403 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.403 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.403 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.403 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.403 00:28:31 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.403 { 00:05:54.403 "name": "Malloc1", 00:05:54.403 "aliases": [ 00:05:54.403 "bff39681-55ed-4b69-858f-bb202facd605" 00:05:54.403 ], 00:05:54.403 "product_name": "Malloc disk", 00:05:54.403 "block_size": 4096, 00:05:54.403 "num_blocks": 256, 00:05:54.403 "uuid": "bff39681-55ed-4b69-858f-bb202facd605", 00:05:54.403 "assigned_rate_limits": { 00:05:54.403 "rw_ios_per_sec": 0, 00:05:54.403 "rw_mbytes_per_sec": 0, 00:05:54.403 "r_mbytes_per_sec": 0, 00:05:54.403 "w_mbytes_per_sec": 0 00:05:54.403 }, 00:05:54.403 "claimed": false, 00:05:54.403 "zoned": false, 00:05:54.403 "supported_io_types": { 00:05:54.403 "read": true, 00:05:54.403 "write": true, 00:05:54.403 "unmap": true, 00:05:54.403 "flush": true, 00:05:54.403 "reset": true, 00:05:54.403 "nvme_admin": false, 00:05:54.403 "nvme_io": false, 00:05:54.403 "nvme_io_md": false, 00:05:54.403 "write_zeroes": true, 00:05:54.403 "zcopy": true, 00:05:54.403 "get_zone_info": false, 00:05:54.403 "zone_management": false, 00:05:54.403 "zone_append": false, 00:05:54.403 "compare": false, 00:05:54.403 "compare_and_write": false, 00:05:54.403 "abort": true, 00:05:54.403 "seek_hole": false, 00:05:54.403 "seek_data": false, 00:05:54.403 "copy": true, 00:05:54.403 "nvme_iov_md": false 00:05:54.403 }, 00:05:54.403 "memory_domains": [ 00:05:54.403 { 00:05:54.403 "dma_device_id": "system", 00:05:54.403 "dma_device_type": 1 00:05:54.403 }, 00:05:54.403 { 00:05:54.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.403 "dma_device_type": 2 00:05:54.403 } 00:05:54.403 ], 00:05:54.403 "driver_specific": {} 00:05:54.403 } 00:05:54.403 ]' 00:05:54.403 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.665 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.665 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.665 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.665 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.665 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:54.665 00:28:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:54.665 00:05:54.665 real 0m0.126s 00:05:54.665 user 0m0.063s 00:05:54.665 sys 0m0.017s 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.665 00:28:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.665 ************************************ 00:05:54.665 END TEST rpc_plugins 00:05:54.665 ************************************ 00:05:54.665 00:28:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:54.665 00:28:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.665 00:28:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.665 00:28:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.665 ************************************ 00:05:54.665 START TEST rpc_trace_cmd_test 
00:05:54.665 ************************************ 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:54.665 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69555", 00:05:54.665 "tpoint_group_mask": "0x8", 00:05:54.665 "iscsi_conn": { 00:05:54.665 "mask": "0x2", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "scsi": { 00:05:54.665 "mask": "0x4", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "bdev": { 00:05:54.665 "mask": "0x8", 00:05:54.665 "tpoint_mask": "0xffffffffffffffff" 00:05:54.665 }, 00:05:54.665 "nvmf_rdma": { 00:05:54.665 "mask": "0x10", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "nvmf_tcp": { 00:05:54.665 "mask": "0x20", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "ftl": { 00:05:54.665 "mask": "0x40", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "blobfs": { 00:05:54.665 "mask": "0x80", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "dsa": { 00:05:54.665 "mask": "0x200", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "thread": { 00:05:54.665 "mask": "0x400", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "nvme_pcie": { 00:05:54.665 "mask": "0x800", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "iaa": { 00:05:54.665 "mask": "0x1000", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "nvme_tcp": { 00:05:54.665 "mask": "0x2000", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "bdev_nvme": { 00:05:54.665 "mask": "0x4000", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "sock": { 00:05:54.665 "mask": "0x8000", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "blob": { 00:05:54.665 "mask": "0x10000", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "bdev_raid": { 00:05:54.665 "mask": "0x20000", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 }, 00:05:54.665 "scheduler": { 00:05:54.665 "mask": "0x40000", 00:05:54.665 "tpoint_mask": "0x0" 00:05:54.665 } 00:05:54.665 }' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:54.665 00:05:54.665 real 0m0.159s 00:05:54.665 
user 0m0.129s 00:05:54.665 sys 0m0.019s 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.665 00:28:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.665 ************************************ 00:05:54.665 END TEST rpc_trace_cmd_test 00:05:54.665 ************************************ 00:05:54.928 00:28:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:54.928 00:28:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:54.928 00:28:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:54.928 00:28:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.928 00:28:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.928 00:28:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 ************************************ 00:05:54.928 START TEST rpc_daemon_integrity 00:05:54.928 ************************************ 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.928 { 00:05:54.928 "name": "Malloc2", 00:05:54.928 "aliases": [ 00:05:54.928 "4314df99-2fe9-46e8-9f7e-a9f2a38c3ee1" 00:05:54.928 ], 00:05:54.928 "product_name": "Malloc disk", 00:05:54.928 "block_size": 512, 00:05:54.928 "num_blocks": 16384, 00:05:54.928 "uuid": "4314df99-2fe9-46e8-9f7e-a9f2a38c3ee1", 00:05:54.928 "assigned_rate_limits": { 00:05:54.928 "rw_ios_per_sec": 0, 00:05:54.928 "rw_mbytes_per_sec": 0, 00:05:54.928 "r_mbytes_per_sec": 0, 00:05:54.928 "w_mbytes_per_sec": 0 00:05:54.928 }, 00:05:54.928 "claimed": false, 00:05:54.928 "zoned": false, 00:05:54.928 "supported_io_types": { 00:05:54.928 "read": true, 00:05:54.928 "write": true, 00:05:54.928 "unmap": true, 00:05:54.928 "flush": true, 00:05:54.928 "reset": true, 00:05:54.928 "nvme_admin": false, 00:05:54.928 "nvme_io": false, 00:05:54.928 "nvme_io_md": false, 00:05:54.928 "write_zeroes": true, 00:05:54.928 "zcopy": true, 00:05:54.928 "get_zone_info": 
false, 00:05:54.928 "zone_management": false, 00:05:54.928 "zone_append": false, 00:05:54.928 "compare": false, 00:05:54.928 "compare_and_write": false, 00:05:54.928 "abort": true, 00:05:54.928 "seek_hole": false, 00:05:54.928 "seek_data": false, 00:05:54.928 "copy": true, 00:05:54.928 "nvme_iov_md": false 00:05:54.928 }, 00:05:54.928 "memory_domains": [ 00:05:54.928 { 00:05:54.928 "dma_device_id": "system", 00:05:54.928 "dma_device_type": 1 00:05:54.928 }, 00:05:54.928 { 00:05:54.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.928 "dma_device_type": 2 00:05:54.928 } 00:05:54.928 ], 00:05:54.928 "driver_specific": {} 00:05:54.928 } 00:05:54.928 ]' 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 [2024-11-27 00:28:31.622736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:54.928 [2024-11-27 00:28:31.622812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.928 [2024-11-27 00:28:31.622849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:54.928 [2024-11-27 00:28:31.622877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.928 [2024-11-27 00:28:31.625628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.928 [2024-11-27 00:28:31.625678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.928 Passthru0 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.928 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.928 { 00:05:54.928 "name": "Malloc2", 00:05:54.928 "aliases": [ 00:05:54.928 "4314df99-2fe9-46e8-9f7e-a9f2a38c3ee1" 00:05:54.928 ], 00:05:54.928 "product_name": "Malloc disk", 00:05:54.928 "block_size": 512, 00:05:54.928 "num_blocks": 16384, 00:05:54.928 "uuid": "4314df99-2fe9-46e8-9f7e-a9f2a38c3ee1", 00:05:54.928 "assigned_rate_limits": { 00:05:54.928 "rw_ios_per_sec": 0, 00:05:54.928 "rw_mbytes_per_sec": 0, 00:05:54.928 "r_mbytes_per_sec": 0, 00:05:54.928 "w_mbytes_per_sec": 0 00:05:54.928 }, 00:05:54.928 "claimed": true, 00:05:54.928 "claim_type": "exclusive_write", 00:05:54.928 "zoned": false, 00:05:54.928 "supported_io_types": { 00:05:54.928 "read": true, 00:05:54.928 "write": true, 00:05:54.928 "unmap": true, 00:05:54.928 "flush": true, 00:05:54.928 "reset": true, 00:05:54.928 "nvme_admin": false, 00:05:54.928 "nvme_io": false, 00:05:54.928 "nvme_io_md": false, 00:05:54.928 "write_zeroes": true, 00:05:54.928 "zcopy": true, 00:05:54.928 "get_zone_info": false, 00:05:54.928 "zone_management": false, 00:05:54.928 "zone_append": false, 00:05:54.928 "compare": false, 
00:05:54.928 "compare_and_write": false, 00:05:54.928 "abort": true, 00:05:54.928 "seek_hole": false, 00:05:54.928 "seek_data": false, 00:05:54.928 "copy": true, 00:05:54.928 "nvme_iov_md": false 00:05:54.928 }, 00:05:54.928 "memory_domains": [ 00:05:54.928 { 00:05:54.928 "dma_device_id": "system", 00:05:54.928 "dma_device_type": 1 00:05:54.928 }, 00:05:54.928 { 00:05:54.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.928 "dma_device_type": 2 00:05:54.928 } 00:05:54.929 ], 00:05:54.929 "driver_specific": {} 00:05:54.929 }, 00:05:54.929 { 00:05:54.929 "name": "Passthru0", 00:05:54.929 "aliases": [ 00:05:54.929 "b0776849-d15c-5090-9905-8301af613ad4" 00:05:54.929 ], 00:05:54.929 "product_name": "passthru", 00:05:54.929 "block_size": 512, 00:05:54.929 "num_blocks": 16384, 00:05:54.929 "uuid": "b0776849-d15c-5090-9905-8301af613ad4", 00:05:54.929 "assigned_rate_limits": { 00:05:54.929 "rw_ios_per_sec": 0, 00:05:54.929 "rw_mbytes_per_sec": 0, 00:05:54.929 "r_mbytes_per_sec": 0, 00:05:54.929 "w_mbytes_per_sec": 0 00:05:54.929 }, 00:05:54.929 "claimed": false, 00:05:54.929 "zoned": false, 00:05:54.929 "supported_io_types": { 00:05:54.929 "read": true, 00:05:54.929 "write": true, 00:05:54.929 "unmap": true, 00:05:54.929 "flush": true, 00:05:54.929 "reset": true, 00:05:54.929 "nvme_admin": false, 00:05:54.929 "nvme_io": false, 00:05:54.929 "nvme_io_md": false, 00:05:54.929 "write_zeroes": true, 00:05:54.929 "zcopy": true, 00:05:54.929 "get_zone_info": false, 00:05:54.929 "zone_management": false, 00:05:54.929 "zone_append": false, 00:05:54.929 "compare": false, 00:05:54.929 "compare_and_write": false, 00:05:54.929 "abort": true, 00:05:54.929 "seek_hole": false, 00:05:54.929 "seek_data": false, 00:05:54.929 "copy": true, 00:05:54.929 "nvme_iov_md": false 00:05:54.929 }, 00:05:54.929 "memory_domains": [ 00:05:54.929 { 00:05:54.929 "dma_device_id": "system", 00:05:54.929 "dma_device_type": 1 00:05:54.929 }, 00:05:54.929 { 00:05:54.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.929 "dma_device_type": 2 00:05:54.929 } 00:05:54.929 ], 00:05:54.929 "driver_specific": { 00:05:54.929 "passthru": { 00:05:54.929 "name": "Passthru0", 00:05:54.929 "base_bdev_name": "Malloc2" 00:05:54.929 } 00:05:54.929 } 00:05:54.929 } 00:05:54.929 ]' 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.929 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.190 00:28:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.190 00:05:55.190 real 0m0.229s 00:05:55.190 user 0m0.129s 00:05:55.190 sys 0m0.035s 00:05:55.190 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.190 00:28:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.190 ************************************ 00:05:55.190 END TEST rpc_daemon_integrity 00:05:55.190 ************************************ 00:05:55.190 00:28:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:55.190 00:28:31 rpc -- rpc/rpc.sh@84 -- # killprocess 69555 00:05:55.190 00:28:31 rpc -- common/autotest_common.sh@954 -- # '[' -z 69555 ']' 00:05:55.190 00:28:31 rpc -- common/autotest_common.sh@958 -- # kill -0 69555 00:05:55.190 00:28:31 rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.190 00:28:31 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.190 00:28:31 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69555 00:05:55.190 00:28:31 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.190 killing process with pid 69555 00:05:55.191 00:28:31 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.191 00:28:31 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69555' 00:05:55.191 00:28:31 rpc -- common/autotest_common.sh@973 -- # kill 69555 00:05:55.191 00:28:31 rpc -- common/autotest_common.sh@978 -- # wait 69555 00:05:55.762 00:05:55.762 real 0m2.514s 00:05:55.762 user 0m2.766s 00:05:55.762 sys 0m0.773s 00:05:55.762 00:28:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.762 ************************************ 00:05:55.762 END TEST rpc 00:05:55.762 ************************************ 00:05:55.762 00:28:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.762 00:28:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:55.762 00:28:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.762 00:28:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.762 00:28:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.762 ************************************ 00:05:55.762 START TEST skip_rpc 00:05:55.762 ************************************ 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:55.762 * Looking for test storage... 
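Stripped of the harness wrappers, the rpc_daemon_integrity suite above is a short round trip through the bdev RPCs. A minimal sketch using rpc.py directly (rpc_cmd in the trace is a thin wrapper around it; sizes and names are the ones in the log — 8 MB at 512 B blocks is the 16384-block Malloc2 shown in the dumps):

  rpc.py bdev_malloc_create 8 512                    # creates Malloc2
  rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  rpc.py bdev_get_bdevs | jq length                  # expect 2: Malloc2 + Passthru0
  rpc.py bdev_passthru_delete Passthru0
  rpc.py bdev_malloc_delete Malloc2
  rpc.py bdev_get_bdevs | jq length                  # expect 0 again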
00:05:55.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.762 00:28:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.762 --rc genhtml_branch_coverage=1 00:05:55.762 --rc genhtml_function_coverage=1 00:05:55.762 --rc genhtml_legend=1 00:05:55.762 --rc geninfo_all_blocks=1 00:05:55.762 --rc geninfo_unexecuted_blocks=1 00:05:55.762 00:05:55.762 ' 00:05:55.762 00:28:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.762 00:28:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:55.762 00:28:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.762 00:28:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.762 ************************************ 00:05:55.762 START TEST skip_rpc 00:05:55.762 ************************************ 00:05:55.762 00:28:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:55.762 00:28:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69756 00:05:55.762 00:28:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.762 00:28:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:55.762 00:28:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:56.024 [2024-11-27 00:28:32.586568] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
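The skip_rpc case just started spdk_tgt with --no-rpc-server, and the NOT/rpc_cmd exchange a few records below passes only if the RPC call fails. A hedged stand-alone equivalent, assuming the workspace layout shown in this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                                      # the script's fixed start-up wait
  # no RPC server is listening, so this must exit non-zero for the test to pass
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version && echo 'FAIL: RPC unexpectedly served'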
00:05:56.024 [2024-11-27 00:28:32.586737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69756 ] 00:05:56.024 [2024-11-27 00:28:32.756216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.024 [2024-11-27 00:28:32.799976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69756 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69756 ']' 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69756 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69756 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.308 killing process with pid 69756 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69756' 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69756 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69756 00:06:01.308 00:06:01.308 real 0m5.338s 00:06:01.308 user 0m4.797s 00:06:01.308 sys 0m0.422s 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.308 ************************************ 00:06:01.308 END TEST skip_rpc 00:06:01.308 ************************************ 00:06:01.308 00:28:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:01.308 00:28:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:01.308 00:28:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.308 00:28:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.308 00:28:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.308 ************************************ 00:06:01.308 START TEST skip_rpc_with_json 00:06:01.308 ************************************ 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69838 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69838 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69838 ']' 00:06:01.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.308 00:28:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.309 00:28:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.309 [2024-11-27 00:28:37.962584] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
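skip_rpc_with_json drives configuration into a live target over RPC, snapshots it with save_config, then restarts the target from that snapshot and greps its log for the transport banner. The cycle, sketched with the RPC names and paths visible in the trace:

  rpc.py nvmf_create_transport -t tcp
  rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
      2> /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
  # the restored config must have recreated the TCP transport
  grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt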
00:06:01.309 [2024-11-27 00:28:37.962704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69838 ] 00:06:01.570 [2024-11-27 00:28:38.130265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.570 [2024-11-27 00:28:38.170169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.141 [2024-11-27 00:28:38.826632] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.141 request: 00:06:02.141 { 00:06:02.141 "trtype": "tcp", 00:06:02.141 "method": "nvmf_get_transports", 00:06:02.141 "req_id": 1 00:06:02.141 } 00:06:02.141 Got JSON-RPC error response 00:06:02.141 response: 00:06:02.141 { 00:06:02.141 "code": -19, 00:06:02.141 "message": "No such device" 00:06:02.141 } 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.141 [2024-11-27 00:28:38.838762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.141 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.403 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.403 00:28:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.403 { 00:06:02.403 "subsystems": [ 00:06:02.403 { 00:06:02.403 "subsystem": "fsdev", 00:06:02.403 "config": [ 00:06:02.403 { 00:06:02.403 "method": "fsdev_set_opts", 00:06:02.403 "params": { 00:06:02.403 "fsdev_io_pool_size": 65535, 00:06:02.403 "fsdev_io_cache_size": 256 00:06:02.403 } 00:06:02.403 } 00:06:02.403 ] 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "subsystem": "keyring", 00:06:02.403 "config": [] 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "subsystem": "iobuf", 00:06:02.403 "config": [ 00:06:02.403 { 00:06:02.403 "method": "iobuf_set_options", 00:06:02.403 "params": { 00:06:02.403 "small_pool_count": 8192, 00:06:02.403 "large_pool_count": 1024, 00:06:02.403 "small_bufsize": 8192, 00:06:02.403 "large_bufsize": 135168, 00:06:02.403 "enable_numa": false 00:06:02.403 } 00:06:02.403 } 00:06:02.403 ] 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "subsystem": "sock", 00:06:02.403 "config": [ 00:06:02.403 { 
00:06:02.403 "method": "sock_set_default_impl", 00:06:02.403 "params": { 00:06:02.403 "impl_name": "posix" 00:06:02.403 } 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "method": "sock_impl_set_options", 00:06:02.403 "params": { 00:06:02.403 "impl_name": "ssl", 00:06:02.403 "recv_buf_size": 4096, 00:06:02.403 "send_buf_size": 4096, 00:06:02.403 "enable_recv_pipe": true, 00:06:02.403 "enable_quickack": false, 00:06:02.403 "enable_placement_id": 0, 00:06:02.403 "enable_zerocopy_send_server": true, 00:06:02.403 "enable_zerocopy_send_client": false, 00:06:02.403 "zerocopy_threshold": 0, 00:06:02.403 "tls_version": 0, 00:06:02.403 "enable_ktls": false 00:06:02.403 } 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "method": "sock_impl_set_options", 00:06:02.403 "params": { 00:06:02.403 "impl_name": "posix", 00:06:02.403 "recv_buf_size": 2097152, 00:06:02.403 "send_buf_size": 2097152, 00:06:02.403 "enable_recv_pipe": true, 00:06:02.403 "enable_quickack": false, 00:06:02.403 "enable_placement_id": 0, 00:06:02.403 "enable_zerocopy_send_server": true, 00:06:02.403 "enable_zerocopy_send_client": false, 00:06:02.403 "zerocopy_threshold": 0, 00:06:02.403 "tls_version": 0, 00:06:02.403 "enable_ktls": false 00:06:02.403 } 00:06:02.403 } 00:06:02.403 ] 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "subsystem": "vmd", 00:06:02.403 "config": [] 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "subsystem": "accel", 00:06:02.403 "config": [ 00:06:02.403 { 00:06:02.403 "method": "accel_set_options", 00:06:02.403 "params": { 00:06:02.403 "small_cache_size": 128, 00:06:02.403 "large_cache_size": 16, 00:06:02.403 "task_count": 2048, 00:06:02.403 "sequence_count": 2048, 00:06:02.403 "buf_count": 2048 00:06:02.403 } 00:06:02.403 } 00:06:02.403 ] 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "subsystem": "bdev", 00:06:02.403 "config": [ 00:06:02.403 { 00:06:02.403 "method": "bdev_set_options", 00:06:02.403 "params": { 00:06:02.403 "bdev_io_pool_size": 65535, 00:06:02.403 "bdev_io_cache_size": 256, 00:06:02.403 "bdev_auto_examine": true, 00:06:02.403 "iobuf_small_cache_size": 128, 00:06:02.403 "iobuf_large_cache_size": 16 00:06:02.403 } 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "method": "bdev_raid_set_options", 00:06:02.403 "params": { 00:06:02.403 "process_window_size_kb": 1024, 00:06:02.403 "process_max_bandwidth_mb_sec": 0 00:06:02.403 } 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "method": "bdev_iscsi_set_options", 00:06:02.403 "params": { 00:06:02.403 "timeout_sec": 30 00:06:02.403 } 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "method": "bdev_nvme_set_options", 00:06:02.403 "params": { 00:06:02.403 "action_on_timeout": "none", 00:06:02.403 "timeout_us": 0, 00:06:02.403 "timeout_admin_us": 0, 00:06:02.403 "keep_alive_timeout_ms": 10000, 00:06:02.403 "arbitration_burst": 0, 00:06:02.403 "low_priority_weight": 0, 00:06:02.403 "medium_priority_weight": 0, 00:06:02.403 "high_priority_weight": 0, 00:06:02.403 "nvme_adminq_poll_period_us": 10000, 00:06:02.403 "nvme_ioq_poll_period_us": 0, 00:06:02.403 "io_queue_requests": 0, 00:06:02.403 "delay_cmd_submit": true, 00:06:02.403 "transport_retry_count": 4, 00:06:02.403 "bdev_retry_count": 3, 00:06:02.403 "transport_ack_timeout": 0, 00:06:02.403 "ctrlr_loss_timeout_sec": 0, 00:06:02.403 "reconnect_delay_sec": 0, 00:06:02.403 "fast_io_fail_timeout_sec": 0, 00:06:02.403 "disable_auto_failback": false, 00:06:02.403 "generate_uuids": false, 00:06:02.403 "transport_tos": 0, 00:06:02.403 "nvme_error_stat": false, 00:06:02.403 "rdma_srq_size": 0, 00:06:02.403 "io_path_stat": false, 
00:06:02.403 "allow_accel_sequence": false, 00:06:02.403 "rdma_max_cq_size": 0, 00:06:02.403 "rdma_cm_event_timeout_ms": 0, 00:06:02.403 "dhchap_digests": [ 00:06:02.403 "sha256", 00:06:02.403 "sha384", 00:06:02.403 "sha512" 00:06:02.403 ], 00:06:02.403 "dhchap_dhgroups": [ 00:06:02.403 "null", 00:06:02.403 "ffdhe2048", 00:06:02.403 "ffdhe3072", 00:06:02.403 "ffdhe4096", 00:06:02.403 "ffdhe6144", 00:06:02.403 "ffdhe8192" 00:06:02.403 ] 00:06:02.403 } 00:06:02.403 }, 00:06:02.403 { 00:06:02.403 "method": "bdev_nvme_set_hotplug", 00:06:02.404 "params": { 00:06:02.404 "period_us": 100000, 00:06:02.404 "enable": false 00:06:02.404 } 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "method": "bdev_wait_for_examine" 00:06:02.404 } 00:06:02.404 ] 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "scsi", 00:06:02.404 "config": null 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "scheduler", 00:06:02.404 "config": [ 00:06:02.404 { 00:06:02.404 "method": "framework_set_scheduler", 00:06:02.404 "params": { 00:06:02.404 "name": "static" 00:06:02.404 } 00:06:02.404 } 00:06:02.404 ] 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "vhost_scsi", 00:06:02.404 "config": [] 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "vhost_blk", 00:06:02.404 "config": [] 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "ublk", 00:06:02.404 "config": [] 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "nbd", 00:06:02.404 "config": [] 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "nvmf", 00:06:02.404 "config": [ 00:06:02.404 { 00:06:02.404 "method": "nvmf_set_config", 00:06:02.404 "params": { 00:06:02.404 "discovery_filter": "match_any", 00:06:02.404 "admin_cmd_passthru": { 00:06:02.404 "identify_ctrlr": false 00:06:02.404 }, 00:06:02.404 "dhchap_digests": [ 00:06:02.404 "sha256", 00:06:02.404 "sha384", 00:06:02.404 "sha512" 00:06:02.404 ], 00:06:02.404 "dhchap_dhgroups": [ 00:06:02.404 "null", 00:06:02.404 "ffdhe2048", 00:06:02.404 "ffdhe3072", 00:06:02.404 "ffdhe4096", 00:06:02.404 "ffdhe6144", 00:06:02.404 "ffdhe8192" 00:06:02.404 ] 00:06:02.404 } 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "method": "nvmf_set_max_subsystems", 00:06:02.404 "params": { 00:06:02.404 "max_subsystems": 1024 00:06:02.404 } 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "method": "nvmf_set_crdt", 00:06:02.404 "params": { 00:06:02.404 "crdt1": 0, 00:06:02.404 "crdt2": 0, 00:06:02.404 "crdt3": 0 00:06:02.404 } 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "method": "nvmf_create_transport", 00:06:02.404 "params": { 00:06:02.404 "trtype": "TCP", 00:06:02.404 "max_queue_depth": 128, 00:06:02.404 "max_io_qpairs_per_ctrlr": 127, 00:06:02.404 "in_capsule_data_size": 4096, 00:06:02.404 "max_io_size": 131072, 00:06:02.404 "io_unit_size": 131072, 00:06:02.404 "max_aq_depth": 128, 00:06:02.404 "num_shared_buffers": 511, 00:06:02.404 "buf_cache_size": 4294967295, 00:06:02.404 "dif_insert_or_strip": false, 00:06:02.404 "zcopy": false, 00:06:02.404 "c2h_success": true, 00:06:02.404 "sock_priority": 0, 00:06:02.404 "abort_timeout_sec": 1, 00:06:02.404 "ack_timeout": 0, 00:06:02.404 "data_wr_pool_size": 0 00:06:02.404 } 00:06:02.404 } 00:06:02.404 ] 00:06:02.404 }, 00:06:02.404 { 00:06:02.404 "subsystem": "iscsi", 00:06:02.404 "config": [ 00:06:02.404 { 00:06:02.404 "method": "iscsi_set_options", 00:06:02.404 "params": { 00:06:02.404 "node_base": "iqn.2016-06.io.spdk", 00:06:02.404 "max_sessions": 128, 00:06:02.404 "max_connections_per_session": 2, 00:06:02.404 "max_queue_depth": 64, 00:06:02.404 
"default_time2wait": 2, 00:06:02.404 "default_time2retain": 20, 00:06:02.404 "first_burst_length": 8192, 00:06:02.404 "immediate_data": true, 00:06:02.404 "allow_duplicated_isid": false, 00:06:02.404 "error_recovery_level": 0, 00:06:02.404 "nop_timeout": 60, 00:06:02.404 "nop_in_interval": 30, 00:06:02.404 "disable_chap": false, 00:06:02.404 "require_chap": false, 00:06:02.404 "mutual_chap": false, 00:06:02.404 "chap_group": 0, 00:06:02.404 "max_large_datain_per_connection": 64, 00:06:02.404 "max_r2t_per_connection": 4, 00:06:02.404 "pdu_pool_size": 36864, 00:06:02.404 "immediate_data_pool_size": 16384, 00:06:02.404 "data_out_pool_size": 2048 00:06:02.404 } 00:06:02.404 } 00:06:02.404 ] 00:06:02.404 } 00:06:02.404 ] 00:06:02.404 } 00:06:02.404 00:28:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.404 00:28:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69838 00:06:02.404 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69838 ']' 00:06:02.404 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69838 00:06:02.404 00:28:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:02.404 00:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.404 00:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69838 00:06:02.404 00:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.404 killing process with pid 69838 00:06:02.404 00:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.404 00:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69838' 00:06:02.404 00:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69838 00:06:02.404 00:28:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69838 00:06:02.977 00:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69872 00:06:02.977 00:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:02.977 00:28:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69872 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69872 ']' 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69872 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69872 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.252 killing process with pid 69872 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69872' 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69872 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69872 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.252 00:06:08.252 real 0m6.982s 00:06:08.252 user 0m6.343s 00:06:08.252 sys 0m0.887s 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.252 ************************************ 00:06:08.252 END TEST skip_rpc_with_json 00:06:08.252 ************************************ 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.252 00:28:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.252 00:28:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.252 00:28:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.252 00:28:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.252 ************************************ 00:06:08.252 START TEST skip_rpc_with_delay 00:06:08.252 ************************************ 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.252 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.253 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.253 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.253 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.253 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.253 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.253 00:28:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.253 [2024-11-27 00:28:45.013233] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
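That *ERROR* record is the whole point of skip_rpc_with_delay: --wait-for-rpc defers subsystem init until an RPC tells the app to proceed, which is unsatisfiable under --no-rpc-server, so spdk_tgt must refuse to start. The pass condition in sketch form:

  # passes only if the target exits non-zero at start-up
  spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc || echo 'refused to start, as expected'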
00:06:08.514 00:28:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:08.514 00:28:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.514 00:28:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.514 00:28:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.514 00:06:08.514 real 0m0.136s 00:06:08.514 user 0m0.063s 00:06:08.514 sys 0m0.071s 00:06:08.514 00:28:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.514 00:28:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:08.514 ************************************ 00:06:08.514 END TEST skip_rpc_with_delay 00:06:08.514 ************************************ 00:06:08.514 00:28:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:08.514 00:28:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:08.514 00:28:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:08.514 00:28:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.514 00:28:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.514 00:28:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.514 ************************************ 00:06:08.514 START TEST exit_on_failed_rpc_init 00:06:08.514 ************************************ 00:06:08.514 00:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:08.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.514 00:28:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69978 00:06:08.514 00:28:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69978 00:06:08.514 00:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69978 ']' 00:06:08.514 00:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.514 00:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.514 00:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.515 00:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.515 00:28:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.515 00:28:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.515 [2024-11-27 00:28:45.214507] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:08.515 [2024-11-27 00:28:45.214640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69978 ] 00:06:08.774 [2024-11-27 00:28:45.371661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.774 [2024-11-27 00:28:45.405707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:09.341 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.341 [2024-11-27 00:28:46.120817] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:09.341 [2024-11-27 00:28:46.120952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69996 ] 00:06:09.599 [2024-11-27 00:28:46.279746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.599 [2024-11-27 00:28:46.299062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.599 [2024-11-27 00:28:46.299139] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
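Here exit_on_failed_rpc_init provokes exactly the failure it is named for: both targets default to /var/tmp/spdk.sock, so the second one's rpc_listen fails and the app is expected to stop non-zero (the spdk_app_stop warning just below). In outline:

  spdk_tgt -m 0x1 &        # first target claims /var/tmp/spdk.sock
  sleep 1                  # assumed settling time; the harness waits on the socket instead
  spdk_tgt -m 0x2          # 'socket ... in use' -> RPC init fails -> non-zero exit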
00:06:09.599 [2024-11-27 00:28:46.299159] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.599 [2024-11-27 00:28:46.299169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69978 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69978 ']' 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69978 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.599 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69978 00:06:09.857 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.857 killing process with pid 69978 00:06:09.857 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.858 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69978' 00:06:09.858 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69978 00:06:09.858 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69978 00:06:10.116 00:06:10.116 real 0m1.553s 00:06:10.116 user 0m1.637s 00:06:10.116 sys 0m0.441s 00:06:10.116 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.116 00:28:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.116 ************************************ 00:06:10.116 END TEST exit_on_failed_rpc_init 00:06:10.116 ************************************ 00:06:10.116 00:28:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.116 00:06:10.116 real 0m14.402s 00:06:10.116 user 0m12.999s 00:06:10.116 sys 0m2.001s 00:06:10.116 00:28:46 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.116 00:28:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.116 ************************************ 00:06:10.116 END TEST skip_rpc 00:06:10.116 ************************************ 00:06:10.116 00:28:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.116 00:28:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.116 00:28:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.116 00:28:46 -- common/autotest_common.sh@10 -- # set +x 00:06:10.116 
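The rpc_client suite that follows is different in kind: rather than shell-driven rpc.py calls, it runs a compiled C test binary that exercises the SPDK JSON-RPC client library and prints OK on success, so the harness reduces to one invocation:

  /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test    # 'OK' + exit 0 expected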
************************************ 00:06:10.116 START TEST rpc_client 00:06:10.116 ************************************ 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.116 * Looking for test storage... 00:06:10.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.116 00:28:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.116 --rc genhtml_branch_coverage=1 00:06:10.116 --rc genhtml_function_coverage=1 00:06:10.116 --rc genhtml_legend=1 00:06:10.116 --rc geninfo_all_blocks=1 00:06:10.116 --rc geninfo_unexecuted_blocks=1 00:06:10.116 00:06:10.116 ' 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.116 --rc genhtml_branch_coverage=1 00:06:10.116 --rc genhtml_function_coverage=1 00:06:10.116 --rc genhtml_legend=1 00:06:10.116 --rc geninfo_all_blocks=1 00:06:10.116 --rc geninfo_unexecuted_blocks=1 00:06:10.116 00:06:10.116 ' 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.116 --rc genhtml_branch_coverage=1 00:06:10.116 --rc genhtml_function_coverage=1 00:06:10.116 --rc genhtml_legend=1 00:06:10.116 --rc geninfo_all_blocks=1 00:06:10.116 --rc geninfo_unexecuted_blocks=1 00:06:10.116 00:06:10.116 ' 00:06:10.116 00:28:46 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.116 --rc genhtml_branch_coverage=1 00:06:10.116 --rc genhtml_function_coverage=1 00:06:10.117 --rc genhtml_legend=1 00:06:10.117 --rc geninfo_all_blocks=1 00:06:10.117 --rc geninfo_unexecuted_blocks=1 00:06:10.117 00:06:10.117 ' 00:06:10.117 00:28:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:10.375 OK 00:06:10.375 00:28:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.375 00:06:10.375 real 0m0.178s 00:06:10.375 user 0m0.120s 00:06:10.375 sys 0m0.069s 00:06:10.375 00:28:46 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.375 00:28:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.375 ************************************ 00:06:10.375 END TEST rpc_client 00:06:10.375 ************************************ 00:06:10.375 00:28:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.375 00:28:46 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.375 00:28:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.375 00:28:46 -- common/autotest_common.sh@10 -- # set +x 00:06:10.375 ************************************ 00:06:10.375 START TEST json_config 00:06:10.375 ************************************ 00:06:10.375 00:28:46 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.375 00:28:47 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.375 00:28:47 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.375 00:28:47 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.375 00:28:47 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.375 00:28:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.375 00:28:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.375 00:28:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.375 00:28:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.375 00:28:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.375 00:28:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.375 00:28:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.375 00:28:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:10.375 00:28:47 json_config -- scripts/common.sh@345 -- # : 1 00:06:10.375 00:28:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.375 00:28:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.375 00:28:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:10.375 00:28:47 json_config -- scripts/common.sh@353 -- # local d=1 00:06:10.375 00:28:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.375 00:28:47 json_config -- scripts/common.sh@355 -- # echo 1 00:06:10.375 00:28:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.375 00:28:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@353 -- # local d=2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.375 00:28:47 json_config -- scripts/common.sh@355 -- # echo 2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.375 00:28:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.375 00:28:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.376 00:28:47 json_config -- scripts/common.sh@368 -- # return 0 00:06:10.376 00:28:47 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.376 00:28:47 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.376 --rc genhtml_branch_coverage=1 00:06:10.376 --rc genhtml_function_coverage=1 00:06:10.376 --rc genhtml_legend=1 00:06:10.376 --rc geninfo_all_blocks=1 00:06:10.376 --rc geninfo_unexecuted_blocks=1 00:06:10.376 00:06:10.376 ' 00:06:10.376 00:28:47 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.376 --rc genhtml_branch_coverage=1 00:06:10.376 --rc genhtml_function_coverage=1 00:06:10.376 --rc genhtml_legend=1 00:06:10.376 --rc geninfo_all_blocks=1 00:06:10.376 --rc geninfo_unexecuted_blocks=1 00:06:10.376 00:06:10.376 ' 00:06:10.376 00:28:47 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.376 --rc genhtml_branch_coverage=1 00:06:10.376 --rc genhtml_function_coverage=1 00:06:10.376 --rc genhtml_legend=1 00:06:10.376 --rc geninfo_all_blocks=1 00:06:10.376 --rc geninfo_unexecuted_blocks=1 00:06:10.376 00:06:10.376 ' 00:06:10.376 00:28:47 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.376 --rc genhtml_branch_coverage=1 00:06:10.376 --rc genhtml_function_coverage=1 00:06:10.376 --rc genhtml_legend=1 00:06:10.376 --rc geninfo_all_blocks=1 00:06:10.376 --rc geninfo_unexecuted_blocks=1 00:06:10.376 00:06:10.376 ' 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.376 00:28:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f838944b-503a-4293-87ba-5ffd451304f8 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f838944b-503a-4293-87ba-5ffd451304f8 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.376 00:28:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.376 00:28:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.376 00:28:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.376 00:28:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.376 00:28:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.376 00:28:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.376 00:28:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.376 00:28:47 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.376 00:28:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@51 -- # : 0 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.376 00:28:47 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.376 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.376 00:28:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.376 WARNING: No tests are enabled so not running JSON configuration tests 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:10.376 00:28:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:10.376 00:06:10.376 real 0m0.129s 00:06:10.376 user 0m0.080s 00:06:10.376 sys 0m0.054s 00:06:10.376 00:28:47 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.376 00:28:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.376 ************************************ 00:06:10.376 END TEST json_config 00:06:10.376 ************************************ 00:06:10.376 00:28:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:10.376 00:28:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.376 00:28:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.376 00:28:47 -- common/autotest_common.sh@10 -- # set +x 00:06:10.376 ************************************ 00:06:10.376 START TEST json_config_extra_key 00:06:10.376 ************************************ 00:06:10.376 00:28:47 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.635 00:28:47 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.635 --rc genhtml_branch_coverage=1 00:06:10.635 --rc genhtml_function_coverage=1 00:06:10.635 --rc genhtml_legend=1 00:06:10.635 --rc geninfo_all_blocks=1 00:06:10.635 --rc geninfo_unexecuted_blocks=1 00:06:10.635 00:06:10.635 ' 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.635 --rc genhtml_branch_coverage=1 00:06:10.635 --rc genhtml_function_coverage=1 00:06:10.635 --rc genhtml_legend=1 00:06:10.635 --rc geninfo_all_blocks=1 00:06:10.635 --rc geninfo_unexecuted_blocks=1 00:06:10.635 00:06:10.635 ' 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.635 --rc genhtml_branch_coverage=1 00:06:10.635 --rc genhtml_function_coverage=1 00:06:10.635 --rc genhtml_legend=1 00:06:10.635 --rc geninfo_all_blocks=1 00:06:10.635 --rc geninfo_unexecuted_blocks=1 00:06:10.635 00:06:10.635 ' 00:06:10.635 00:28:47 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.635 --rc genhtml_branch_coverage=1 00:06:10.635 --rc 
genhtml_function_coverage=1 00:06:10.635 --rc genhtml_legend=1 00:06:10.635 --rc geninfo_all_blocks=1 00:06:10.635 --rc geninfo_unexecuted_blocks=1 00:06:10.635 00:06:10.635 ' 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f838944b-503a-4293-87ba-5ffd451304f8 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f838944b-503a-4293-87ba-5ffd451304f8 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.635 00:28:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.635 00:28:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.635 00:28:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.635 00:28:47 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.635 00:28:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.635 00:28:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.635 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.635 00:28:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.635 INFO: launching applications... 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
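Note: the "[: : integer expression expected" complaint recorded above (both times test/nvmf/common.sh is sourced) comes from its line 33, where an empty string reaches the numeric test '[' '' -eq 1 ']'. A minimal reproduction plus one conventional guard, sketched with a hypothetical SOME_FLAG rather than the script's actual variable:

    # Reproduce the recorded error: `[` cannot numerically compare an
    # empty operand, so the test exits with status 2 and prints the
    # "integer expression expected" diagnostic seen in this log.
    unset SOME_FLAG
    [ "$SOME_FLAG" -eq 1 ] && echo enabled
    # -> [: : integer expression expected

    # Guarded form: default the operand so the comparison always sees
    # an integer. This is one common idiom, not the upstream fix.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo enabled
    fi

Whether the variable is meant to be exported elsewhere is not visible from this log; the run proceeds because the failed test simply evaluates false.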
00:06:10.635 00:28:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70174 00:06:10.635 Waiting for target to run... 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.635 00:28:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70174 /var/tmp/spdk_tgt.sock 00:06:10.636 00:28:47 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70174 ']' 00:06:10.636 00:28:47 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.636 00:28:47 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.636 00:28:47 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.636 00:28:47 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.636 00:28:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.636 00:28:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.636 [2024-11-27 00:28:47.334550] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:10.636 [2024-11-27 00:28:47.334650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70174 ] 00:06:10.894 [2024-11-27 00:28:47.640482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.894 [2024-11-27 00:28:47.654646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.463 00:28:48 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.463 00:06:11.463 00:28:48 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.463 INFO: shutting down applications... 00:06:11.463 00:28:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
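The json_config_test_start_app trace above reduces to a simple pattern: launch spdk_tgt with the per-app parameters and RPC socket from the associative arrays, record its pid, then wait until the socket answers. A sketch of that pattern, assuming repo-root paths; helper names are paraphrased, and the real waitforlisten in common/autotest_common.sh retries differently:

    declare -A app_pid app_socket app_params
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'

    start_app() {
        local app=$1 config=$2 i
        # app_params is expanded unquoted on purpose: it is a word list.
        ./build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "$config" &
        app_pid[$app]=$!
        echo "Waiting for $app to run..."
        # Bounded poll: done once the RPC socket accepts a request.
        for ((i = 0; i < 100; i++)); do
            ./scripts/rpc.py -s "${app_socket[$app]}" rpc_get_methods \
                &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    start_app target test/json_config/extra_key.json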
00:06:11.463 00:28:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70174 ]] 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70174 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70174 00:06:11.463 00:28:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.035 00:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.035 00:28:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.035 00:28:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70174 00:06:12.035 00:28:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.035 00:28:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.035 00:28:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.035 SPDK target shutdown done 00:06:12.035 00:28:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.035 Success 00:06:12.035 00:28:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.035 00:06:12.035 real 0m1.540s 00:06:12.035 user 0m1.322s 00:06:12.035 sys 0m0.332s 00:06:12.035 00:28:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.035 00:28:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.035 ************************************ 00:06:12.035 END TEST json_config_extra_key 00:06:12.035 ************************************ 00:06:12.035 00:28:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.035 00:28:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.035 00:28:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.035 00:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:12.035 ************************************ 00:06:12.035 START TEST alias_rpc 00:06:12.035 ************************************ 00:06:12.035 00:28:48 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.035 * Looking for test storage... 
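The shutdown trace above is the mirror image: send SIGINT, then poll kill -0 for at most 30 half-second intervals. In this run the target exits on the first check, hence the single sleep 0.5. A sketch; the escalation branch is an assumption, since the traced run never reaches it:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 only probes: it fails once the process is gone.
            kill -0 "$pid" 2> /dev/null || return 0
            sleep 0.5
        done
        echo "pid $pid still alive after 15s" >&2
        return 1
    }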
00:06:12.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:12.035 00:28:48 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.035 00:28:48 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.035 00:28:48 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.294 00:28:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.294 --rc genhtml_branch_coverage=1 00:06:12.294 --rc genhtml_function_coverage=1 00:06:12.294 --rc genhtml_legend=1 00:06:12.294 --rc geninfo_all_blocks=1 00:06:12.294 --rc geninfo_unexecuted_blocks=1 00:06:12.294 00:06:12.294 ' 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.294 --rc genhtml_branch_coverage=1 00:06:12.294 --rc genhtml_function_coverage=1 00:06:12.294 --rc genhtml_legend=1 00:06:12.294 --rc geninfo_all_blocks=1 00:06:12.294 --rc geninfo_unexecuted_blocks=1 00:06:12.294 00:06:12.294 ' 00:06:12.294 00:28:48 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.294 --rc genhtml_branch_coverage=1 00:06:12.294 --rc genhtml_function_coverage=1 00:06:12.294 --rc genhtml_legend=1 00:06:12.294 --rc geninfo_all_blocks=1 00:06:12.294 --rc geninfo_unexecuted_blocks=1 00:06:12.294 00:06:12.294 ' 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.294 --rc genhtml_branch_coverage=1 00:06:12.294 --rc genhtml_function_coverage=1 00:06:12.294 --rc genhtml_legend=1 00:06:12.294 --rc geninfo_all_blocks=1 00:06:12.294 --rc geninfo_unexecuted_blocks=1 00:06:12.294 00:06:12.294 ' 00:06:12.294 00:28:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.294 00:28:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70252 00:06:12.294 00:28:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70252 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70252 ']' 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.294 00:28:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.294 00:28:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.294 [2024-11-27 00:28:48.942423] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:12.294 [2024-11-27 00:28:48.942930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70252 ] 00:06:12.553 [2024-11-27 00:28:49.101424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.553 [2024-11-27 00:28:49.125999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.117 00:28:49 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.117 00:28:49 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.117 00:28:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:13.376 00:28:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70252 00:06:13.376 00:28:49 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70252 ']' 00:06:13.376 00:28:49 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70252 00:06:13.376 00:28:49 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:13.376 00:28:49 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.376 00:28:49 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70252 00:06:13.376 00:28:50 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.376 killing process with pid 70252 00:06:13.376 00:28:50 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.376 00:28:50 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70252' 00:06:13.376 00:28:50 alias_rpc -- common/autotest_common.sh@973 -- # kill 70252 00:06:13.376 00:28:50 alias_rpc -- common/autotest_common.sh@978 -- # wait 70252 00:06:13.633 00:06:13.633 real 0m1.606s 00:06:13.633 user 0m1.708s 00:06:13.633 sys 0m0.395s 00:06:13.633 00:28:50 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.633 00:28:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.633 ************************************ 00:06:13.633 END TEST alias_rpc 00:06:13.633 ************************************ 00:06:13.633 00:28:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:13.633 00:28:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.633 00:28:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.633 00:28:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.633 00:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.633 ************************************ 00:06:13.633 START TEST spdkcli_tcp 00:06:13.633 ************************************ 00:06:13.633 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.891 * Looking for test storage... 
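Every TEST block in this log, including the spdkcli_tcp block just below, opens with the same scripts/common.sh gate: lt 1.15 2 decides whether the installed lcov predates 2.x and therefore needs the legacy --rc coverage options. A compressed sketch of that comparison; the real script also routes each field through a decimal sanitizer, elided here:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        # Split both versions on . - : into arrays, as the trace shows.
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # Walk the longer array, zero-filling missing fields.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        # All fields equal: only the inclusive operators succeed.
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]
    }

    lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"

Here lt 1.15 2 compares 1 against 2 in the first field, returns true, and the run exports the LCOV_OPTS block repeated throughout this log.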
00:06:13.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.891 00:28:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.891 --rc genhtml_branch_coverage=1 00:06:13.891 --rc genhtml_function_coverage=1 00:06:13.891 --rc genhtml_legend=1 00:06:13.891 --rc geninfo_all_blocks=1 00:06:13.891 --rc geninfo_unexecuted_blocks=1 00:06:13.891 00:06:13.891 ' 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.891 --rc genhtml_branch_coverage=1 00:06:13.891 --rc genhtml_function_coverage=1 00:06:13.891 --rc genhtml_legend=1 00:06:13.891 --rc geninfo_all_blocks=1 00:06:13.891 --rc geninfo_unexecuted_blocks=1 00:06:13.891 
00:06:13.891 ' 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.891 --rc genhtml_branch_coverage=1 00:06:13.891 --rc genhtml_function_coverage=1 00:06:13.891 --rc genhtml_legend=1 00:06:13.891 --rc geninfo_all_blocks=1 00:06:13.891 --rc geninfo_unexecuted_blocks=1 00:06:13.891 00:06:13.891 ' 00:06:13.891 00:28:50 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.891 --rc genhtml_branch_coverage=1 00:06:13.891 --rc genhtml_function_coverage=1 00:06:13.891 --rc genhtml_legend=1 00:06:13.891 --rc geninfo_all_blocks=1 00:06:13.891 --rc geninfo_unexecuted_blocks=1 00:06:13.891 00:06:13.891 ' 00:06:13.891 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:13.891 00:28:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:13.891 00:28:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:13.892 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.892 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.892 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.892 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.892 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70332 00:06:13.892 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70332 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70332 ']' 00:06:13.892 00:28:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.892 00:28:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.892 [2024-11-27 00:28:50.589245] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:13.892 [2024-11-27 00:28:50.589385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70332 ] 00:06:14.150 [2024-11-27 00:28:50.748079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.150 [2024-11-27 00:28:50.773493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.150 [2024-11-27 00:28:50.773536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.717 00:28:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.717 00:28:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:14.717 00:28:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70349 00:06:14.717 00:28:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:14.717 00:28:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.977 [ 00:06:14.977 "bdev_malloc_delete", 00:06:14.977 "bdev_malloc_create", 00:06:14.977 "bdev_null_resize", 00:06:14.977 "bdev_null_delete", 00:06:14.977 "bdev_null_create", 00:06:14.977 "bdev_nvme_cuse_unregister", 00:06:14.977 "bdev_nvme_cuse_register", 00:06:14.977 "bdev_opal_new_user", 00:06:14.977 "bdev_opal_set_lock_state", 00:06:14.977 "bdev_opal_delete", 00:06:14.977 "bdev_opal_get_info", 00:06:14.977 "bdev_opal_create", 00:06:14.977 "bdev_nvme_opal_revert", 00:06:14.977 "bdev_nvme_opal_init", 00:06:14.977 "bdev_nvme_send_cmd", 00:06:14.977 "bdev_nvme_set_keys", 00:06:14.977 "bdev_nvme_get_path_iostat", 00:06:14.977 "bdev_nvme_get_mdns_discovery_info", 00:06:14.977 "bdev_nvme_stop_mdns_discovery", 00:06:14.977 "bdev_nvme_start_mdns_discovery", 00:06:14.977 "bdev_nvme_set_multipath_policy", 00:06:14.977 "bdev_nvme_set_preferred_path", 00:06:14.977 "bdev_nvme_get_io_paths", 00:06:14.977 "bdev_nvme_remove_error_injection", 00:06:14.977 "bdev_nvme_add_error_injection", 00:06:14.977 "bdev_nvme_get_discovery_info", 00:06:14.977 "bdev_nvme_stop_discovery", 00:06:14.977 "bdev_nvme_start_discovery", 00:06:14.977 "bdev_nvme_get_controller_health_info", 00:06:14.977 "bdev_nvme_disable_controller", 00:06:14.977 "bdev_nvme_enable_controller", 00:06:14.977 "bdev_nvme_reset_controller", 00:06:14.977 "bdev_nvme_get_transport_statistics", 00:06:14.977 "bdev_nvme_apply_firmware", 00:06:14.977 "bdev_nvme_detach_controller", 00:06:14.977 "bdev_nvme_get_controllers", 00:06:14.977 "bdev_nvme_attach_controller", 00:06:14.977 "bdev_nvme_set_hotplug", 00:06:14.977 "bdev_nvme_set_options", 00:06:14.977 "bdev_passthru_delete", 00:06:14.977 "bdev_passthru_create", 00:06:14.977 "bdev_lvol_set_parent_bdev", 00:06:14.977 "bdev_lvol_set_parent", 00:06:14.977 "bdev_lvol_check_shallow_copy", 00:06:14.977 "bdev_lvol_start_shallow_copy", 00:06:14.977 "bdev_lvol_grow_lvstore", 00:06:14.977 "bdev_lvol_get_lvols", 00:06:14.977 "bdev_lvol_get_lvstores", 00:06:14.977 "bdev_lvol_delete", 00:06:14.977 "bdev_lvol_set_read_only", 00:06:14.977 "bdev_lvol_resize", 00:06:14.977 "bdev_lvol_decouple_parent", 00:06:14.977 "bdev_lvol_inflate", 00:06:14.977 "bdev_lvol_rename", 00:06:14.977 "bdev_lvol_clone_bdev", 00:06:14.977 "bdev_lvol_clone", 00:06:14.977 "bdev_lvol_snapshot", 00:06:14.977 "bdev_lvol_create", 00:06:14.977 "bdev_lvol_delete_lvstore", 00:06:14.977 "bdev_lvol_rename_lvstore", 00:06:14.977 
"bdev_lvol_create_lvstore", 00:06:14.977 "bdev_raid_set_options", 00:06:14.977 "bdev_raid_remove_base_bdev", 00:06:14.977 "bdev_raid_add_base_bdev", 00:06:14.977 "bdev_raid_delete", 00:06:14.977 "bdev_raid_create", 00:06:14.977 "bdev_raid_get_bdevs", 00:06:14.977 "bdev_error_inject_error", 00:06:14.977 "bdev_error_delete", 00:06:14.977 "bdev_error_create", 00:06:14.977 "bdev_split_delete", 00:06:14.977 "bdev_split_create", 00:06:14.977 "bdev_delay_delete", 00:06:14.977 "bdev_delay_create", 00:06:14.977 "bdev_delay_update_latency", 00:06:14.977 "bdev_zone_block_delete", 00:06:14.977 "bdev_zone_block_create", 00:06:14.977 "blobfs_create", 00:06:14.977 "blobfs_detect", 00:06:14.977 "blobfs_set_cache_size", 00:06:14.977 "bdev_xnvme_delete", 00:06:14.977 "bdev_xnvme_create", 00:06:14.977 "bdev_aio_delete", 00:06:14.977 "bdev_aio_rescan", 00:06:14.977 "bdev_aio_create", 00:06:14.977 "bdev_ftl_set_property", 00:06:14.977 "bdev_ftl_get_properties", 00:06:14.977 "bdev_ftl_get_stats", 00:06:14.977 "bdev_ftl_unmap", 00:06:14.977 "bdev_ftl_unload", 00:06:14.977 "bdev_ftl_delete", 00:06:14.977 "bdev_ftl_load", 00:06:14.977 "bdev_ftl_create", 00:06:14.977 "bdev_virtio_attach_controller", 00:06:14.977 "bdev_virtio_scsi_get_devices", 00:06:14.977 "bdev_virtio_detach_controller", 00:06:14.977 "bdev_virtio_blk_set_hotplug", 00:06:14.977 "bdev_iscsi_delete", 00:06:14.977 "bdev_iscsi_create", 00:06:14.977 "bdev_iscsi_set_options", 00:06:14.977 "accel_error_inject_error", 00:06:14.977 "ioat_scan_accel_module", 00:06:14.977 "dsa_scan_accel_module", 00:06:14.977 "iaa_scan_accel_module", 00:06:14.977 "keyring_file_remove_key", 00:06:14.977 "keyring_file_add_key", 00:06:14.977 "keyring_linux_set_options", 00:06:14.977 "fsdev_aio_delete", 00:06:14.977 "fsdev_aio_create", 00:06:14.977 "iscsi_get_histogram", 00:06:14.977 "iscsi_enable_histogram", 00:06:14.977 "iscsi_set_options", 00:06:14.977 "iscsi_get_auth_groups", 00:06:14.977 "iscsi_auth_group_remove_secret", 00:06:14.977 "iscsi_auth_group_add_secret", 00:06:14.977 "iscsi_delete_auth_group", 00:06:14.977 "iscsi_create_auth_group", 00:06:14.977 "iscsi_set_discovery_auth", 00:06:14.977 "iscsi_get_options", 00:06:14.977 "iscsi_target_node_request_logout", 00:06:14.977 "iscsi_target_node_set_redirect", 00:06:14.977 "iscsi_target_node_set_auth", 00:06:14.977 "iscsi_target_node_add_lun", 00:06:14.977 "iscsi_get_stats", 00:06:14.977 "iscsi_get_connections", 00:06:14.977 "iscsi_portal_group_set_auth", 00:06:14.977 "iscsi_start_portal_group", 00:06:14.977 "iscsi_delete_portal_group", 00:06:14.977 "iscsi_create_portal_group", 00:06:14.977 "iscsi_get_portal_groups", 00:06:14.977 "iscsi_delete_target_node", 00:06:14.977 "iscsi_target_node_remove_pg_ig_maps", 00:06:14.977 "iscsi_target_node_add_pg_ig_maps", 00:06:14.977 "iscsi_create_target_node", 00:06:14.977 "iscsi_get_target_nodes", 00:06:14.977 "iscsi_delete_initiator_group", 00:06:14.977 "iscsi_initiator_group_remove_initiators", 00:06:14.977 "iscsi_initiator_group_add_initiators", 00:06:14.977 "iscsi_create_initiator_group", 00:06:14.977 "iscsi_get_initiator_groups", 00:06:14.977 "nvmf_set_crdt", 00:06:14.977 "nvmf_set_config", 00:06:14.977 "nvmf_set_max_subsystems", 00:06:14.977 "nvmf_stop_mdns_prr", 00:06:14.977 "nvmf_publish_mdns_prr", 00:06:14.977 "nvmf_subsystem_get_listeners", 00:06:14.977 "nvmf_subsystem_get_qpairs", 00:06:14.977 "nvmf_subsystem_get_controllers", 00:06:14.977 "nvmf_get_stats", 00:06:14.977 "nvmf_get_transports", 00:06:14.977 "nvmf_create_transport", 00:06:14.977 "nvmf_get_targets", 00:06:14.977 
"nvmf_delete_target", 00:06:14.977 "nvmf_create_target", 00:06:14.977 "nvmf_subsystem_allow_any_host", 00:06:14.977 "nvmf_subsystem_set_keys", 00:06:14.977 "nvmf_subsystem_remove_host", 00:06:14.977 "nvmf_subsystem_add_host", 00:06:14.977 "nvmf_ns_remove_host", 00:06:14.977 "nvmf_ns_add_host", 00:06:14.977 "nvmf_subsystem_remove_ns", 00:06:14.977 "nvmf_subsystem_set_ns_ana_group", 00:06:14.977 "nvmf_subsystem_add_ns", 00:06:14.977 "nvmf_subsystem_listener_set_ana_state", 00:06:14.977 "nvmf_discovery_get_referrals", 00:06:14.977 "nvmf_discovery_remove_referral", 00:06:14.977 "nvmf_discovery_add_referral", 00:06:14.977 "nvmf_subsystem_remove_listener", 00:06:14.977 "nvmf_subsystem_add_listener", 00:06:14.977 "nvmf_delete_subsystem", 00:06:14.977 "nvmf_create_subsystem", 00:06:14.977 "nvmf_get_subsystems", 00:06:14.977 "env_dpdk_get_mem_stats", 00:06:14.977 "nbd_get_disks", 00:06:14.977 "nbd_stop_disk", 00:06:14.977 "nbd_start_disk", 00:06:14.977 "ublk_recover_disk", 00:06:14.977 "ublk_get_disks", 00:06:14.977 "ublk_stop_disk", 00:06:14.977 "ublk_start_disk", 00:06:14.977 "ublk_destroy_target", 00:06:14.977 "ublk_create_target", 00:06:14.977 "virtio_blk_create_transport", 00:06:14.977 "virtio_blk_get_transports", 00:06:14.977 "vhost_controller_set_coalescing", 00:06:14.977 "vhost_get_controllers", 00:06:14.977 "vhost_delete_controller", 00:06:14.977 "vhost_create_blk_controller", 00:06:14.977 "vhost_scsi_controller_remove_target", 00:06:14.977 "vhost_scsi_controller_add_target", 00:06:14.977 "vhost_start_scsi_controller", 00:06:14.977 "vhost_create_scsi_controller", 00:06:14.977 "thread_set_cpumask", 00:06:14.977 "scheduler_set_options", 00:06:14.977 "framework_get_governor", 00:06:14.977 "framework_get_scheduler", 00:06:14.977 "framework_set_scheduler", 00:06:14.977 "framework_get_reactors", 00:06:14.977 "thread_get_io_channels", 00:06:14.977 "thread_get_pollers", 00:06:14.977 "thread_get_stats", 00:06:14.977 "framework_monitor_context_switch", 00:06:14.977 "spdk_kill_instance", 00:06:14.977 "log_enable_timestamps", 00:06:14.977 "log_get_flags", 00:06:14.977 "log_clear_flag", 00:06:14.977 "log_set_flag", 00:06:14.977 "log_get_level", 00:06:14.977 "log_set_level", 00:06:14.977 "log_get_print_level", 00:06:14.977 "log_set_print_level", 00:06:14.977 "framework_enable_cpumask_locks", 00:06:14.977 "framework_disable_cpumask_locks", 00:06:14.977 "framework_wait_init", 00:06:14.977 "framework_start_init", 00:06:14.977 "scsi_get_devices", 00:06:14.977 "bdev_get_histogram", 00:06:14.977 "bdev_enable_histogram", 00:06:14.978 "bdev_set_qos_limit", 00:06:14.978 "bdev_set_qd_sampling_period", 00:06:14.978 "bdev_get_bdevs", 00:06:14.978 "bdev_reset_iostat", 00:06:14.978 "bdev_get_iostat", 00:06:14.978 "bdev_examine", 00:06:14.978 "bdev_wait_for_examine", 00:06:14.978 "bdev_set_options", 00:06:14.978 "accel_get_stats", 00:06:14.978 "accel_set_options", 00:06:14.978 "accel_set_driver", 00:06:14.978 "accel_crypto_key_destroy", 00:06:14.978 "accel_crypto_keys_get", 00:06:14.978 "accel_crypto_key_create", 00:06:14.978 "accel_assign_opc", 00:06:14.978 "accel_get_module_info", 00:06:14.978 "accel_get_opc_assignments", 00:06:14.978 "vmd_rescan", 00:06:14.978 "vmd_remove_device", 00:06:14.978 "vmd_enable", 00:06:14.978 "sock_get_default_impl", 00:06:14.978 "sock_set_default_impl", 00:06:14.978 "sock_impl_set_options", 00:06:14.978 "sock_impl_get_options", 00:06:14.978 "iobuf_get_stats", 00:06:14.978 "iobuf_set_options", 00:06:14.978 "keyring_get_keys", 00:06:14.978 "framework_get_pci_devices", 00:06:14.978 
"framework_get_config", 00:06:14.978 "framework_get_subsystems", 00:06:14.978 "fsdev_set_opts", 00:06:14.978 "fsdev_get_opts", 00:06:14.978 "trace_get_info", 00:06:14.978 "trace_get_tpoint_group_mask", 00:06:14.978 "trace_disable_tpoint_group", 00:06:14.978 "trace_enable_tpoint_group", 00:06:14.978 "trace_clear_tpoint_mask", 00:06:14.978 "trace_set_tpoint_mask", 00:06:14.978 "notify_get_notifications", 00:06:14.978 "notify_get_types", 00:06:14.978 "spdk_get_version", 00:06:14.978 "rpc_get_methods" 00:06:14.978 ] 00:06:14.978 00:28:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.978 00:28:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:14.978 00:28:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70332 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70332 ']' 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70332 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70332 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.978 killing process with pid 70332 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70332' 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70332 00:06:14.978 00:28:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70332 00:06:15.237 00:06:15.237 real 0m1.622s 00:06:15.237 user 0m2.860s 00:06:15.237 sys 0m0.426s 00:06:15.237 00:28:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.237 00:28:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.237 ************************************ 00:06:15.237 END TEST spdkcli_tcp 00:06:15.237 ************************************ 00:06:15.496 00:28:52 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.496 00:28:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.496 00:28:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.496 00:28:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.496 ************************************ 00:06:15.496 START TEST dpdk_mem_utility 00:06:15.496 ************************************ 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.496 * Looking for test storage... 
00:06:15.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.496 00:28:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.496 --rc genhtml_branch_coverage=1 00:06:15.496 --rc genhtml_function_coverage=1 00:06:15.496 --rc genhtml_legend=1 00:06:15.496 --rc geninfo_all_blocks=1 00:06:15.496 --rc geninfo_unexecuted_blocks=1 00:06:15.496 00:06:15.496 ' 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.496 --rc 
genhtml_branch_coverage=1 00:06:15.496 --rc genhtml_function_coverage=1 00:06:15.496 --rc genhtml_legend=1 00:06:15.496 --rc geninfo_all_blocks=1 00:06:15.496 --rc geninfo_unexecuted_blocks=1 00:06:15.496 00:06:15.496 ' 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.496 --rc genhtml_branch_coverage=1 00:06:15.496 --rc genhtml_function_coverage=1 00:06:15.496 --rc genhtml_legend=1 00:06:15.496 --rc geninfo_all_blocks=1 00:06:15.496 --rc geninfo_unexecuted_blocks=1 00:06:15.496 00:06:15.496 ' 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.496 --rc genhtml_branch_coverage=1 00:06:15.496 --rc genhtml_function_coverage=1 00:06:15.496 --rc genhtml_legend=1 00:06:15.496 --rc geninfo_all_blocks=1 00:06:15.496 --rc geninfo_unexecuted_blocks=1 00:06:15.496 00:06:15.496 ' 00:06:15.496 00:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.496 00:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70426 00:06:15.496 00:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70426 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70426 ']' 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.496 00:28:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.496 00:28:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.496 [2024-11-27 00:28:52.252026] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:15.496 [2024-11-27 00:28:52.252159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70426 ] 00:06:15.755 [2024-11-27 00:28:52.412493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.755 [2024-11-27 00:28:52.437010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.321 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.321 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:16.321 00:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.321 00:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.321 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.321 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.321 { 00:06:16.321 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.321 } 00:06:16.321 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.321 00:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.581 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:16.581 1 heaps totaling size 818.000000 MiB 00:06:16.581 size: 818.000000 MiB heap id: 0 00:06:16.581 end heaps---------- 00:06:16.581 9 mempools totaling size 603.782043 MiB 00:06:16.581 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.581 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.581 size: 100.555481 MiB name: bdev_io_70426 00:06:16.581 size: 50.003479 MiB name: msgpool_70426 00:06:16.581 size: 36.509338 MiB name: fsdev_io_70426 00:06:16.581 size: 21.763794 MiB name: PDU_Pool 00:06:16.581 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.581 size: 4.133484 MiB name: evtpool_70426 00:06:16.581 size: 0.026123 MiB name: Session_Pool 00:06:16.581 end mempools------- 00:06:16.581 6 memzones totaling size 4.142822 MiB 00:06:16.581 size: 1.000366 MiB name: RG_ring_0_70426 00:06:16.581 size: 1.000366 MiB name: RG_ring_1_70426 00:06:16.581 size: 1.000366 MiB name: RG_ring_4_70426 00:06:16.581 size: 1.000366 MiB name: RG_ring_5_70426 00:06:16.581 size: 0.125366 MiB name: RG_ring_2_70426 00:06:16.581 size: 0.015991 MiB name: RG_ring_3_70426 00:06:16.581 end memzones------- 00:06:16.581 00:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.581 heap id: 0 total size: 818.000000 MiB number of busy elements: 314 number of free elements: 15 00:06:16.581 list of free elements. 
size: 10.803040 MiB 00:06:16.581 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:16.581 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:16.581 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:16.581 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:16.581 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:16.581 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:16.581 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:16.581 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:16.581 element at address: 0x20001ae00000 with size: 0.568237 MiB 00:06:16.581 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:16.581 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:16.581 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:16.581 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:16.581 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:16.581 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:16.581 list of standard malloc elements. size: 199.268066 MiB 00:06:16.581 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:16.581 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:16.581 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:16.581 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:16.581 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:16.581 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.581 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:16.581 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.581 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:16.581 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:16.581 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:16.581 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:16.581 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:16.581 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:16.582 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 
00:06:16.582 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:16.582 element at 
address: 0x20001ae95440 with size: 0.000183 MiB 00:06:16.582 element at address: 0x200028265500 with size: 0.000183 MiB 00:06:16.582 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:16.582 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e580 
with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:16.583 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:16.583 list of memzone associated elements. 
size: 607.928894 MiB 00:06:16.583 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:16.583 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.583 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:16.583 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.583 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:16.583 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_70426_0 00:06:16.583 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:16.583 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70426_0 00:06:16.583 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:16.583 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70426_0 00:06:16.583 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:16.583 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.583 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:16.583 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.583 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:16.583 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70426_0 00:06:16.583 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:16.583 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70426 00:06:16.583 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.583 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70426 00:06:16.583 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:16.583 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.583 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:16.583 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.583 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:16.583 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.583 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:16.583 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.583 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:16.583 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70426 00:06:16.583 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:16.583 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70426 00:06:16.583 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:16.583 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70426 00:06:16.583 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:16.583 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70426 00:06:16.583 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:16.583 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70426 00:06:16.583 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:16.583 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70426 00:06:16.583 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:16.583 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.583 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:16.583 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.583 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:16.583 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.583 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:16.583 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70426 00:06:16.583 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:16.583 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70426 00:06:16.583 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:16.583 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.583 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:16.583 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.583 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:16.583 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70426 00:06:16.583 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:16.583 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.583 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:16.583 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70426 00:06:16.583 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:16.583 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70426 00:06:16.583 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:16.583 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70426 00:06:16.583 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:16.583 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.583 00:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.583 00:28:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70426 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70426 ']' 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70426 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70426 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.583 killing process with pid 70426 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70426' 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70426 00:06:16.583 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70426 00:06:16.843 00:06:16.843 real 0m1.486s 00:06:16.843 user 0m1.472s 00:06:16.843 sys 0m0.418s 00:06:16.843 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.843 00:28:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.843 ************************************ 00:06:16.843 END TEST dpdk_mem_utility 00:06:16.843 ************************************ 00:06:16.843 00:28:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.843 00:28:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.843 00:28:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.843 00:28:53 -- common/autotest_common.sh@10 -- # set +x 
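The dpdk_mem_utility run above boils down to four steps: start spdk_tgt, ask it over RPC to dump its DPDK allocation state to /tmp/spdk_mem_dump.txt, then let scripts/dpdk_mem_info.py post-process the dump (a bare invocation prints the heap/mempool/memzone summary; -m 0 expands heap 0 element by element, as shown). A minimal hand-run sketch of the same flow, assuming the repo layout from this log and that rpc_cmd in the trace wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done          # stand-in for the waitforlisten helper
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # returns {"filename": "/tmp/spdk_mem_dump.txt"}
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element view of heap 0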
00:06:16.843 ************************************ 00:06:16.843 START TEST event 00:06:16.843 ************************************ 00:06:16.843 00:28:53 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:16.843 * Looking for test storage... 00:06:17.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.102 00:28:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.102 00:28:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.102 00:28:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.102 00:28:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.102 00:28:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.102 00:28:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.102 00:28:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.102 00:28:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.102 00:28:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.102 00:28:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.102 00:28:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.102 00:28:53 event -- scripts/common.sh@344 -- # case "$op" in 00:06:17.102 00:28:53 event -- scripts/common.sh@345 -- # : 1 00:06:17.102 00:28:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.102 00:28:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.102 00:28:53 event -- scripts/common.sh@365 -- # decimal 1 00:06:17.102 00:28:53 event -- scripts/common.sh@353 -- # local d=1 00:06:17.102 00:28:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.102 00:28:53 event -- scripts/common.sh@355 -- # echo 1 00:06:17.102 00:28:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.102 00:28:53 event -- scripts/common.sh@366 -- # decimal 2 00:06:17.102 00:28:53 event -- scripts/common.sh@353 -- # local d=2 00:06:17.102 00:28:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.102 00:28:53 event -- scripts/common.sh@355 -- # echo 2 00:06:17.102 00:28:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.102 00:28:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.102 00:28:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.102 00:28:53 event -- scripts/common.sh@368 -- # return 0 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc 
geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.102 --rc genhtml_branch_coverage=1 00:06:17.102 --rc genhtml_function_coverage=1 00:06:17.102 --rc genhtml_legend=1 00:06:17.102 --rc geninfo_all_blocks=1 00:06:17.102 --rc geninfo_unexecuted_blocks=1 00:06:17.102 00:06:17.102 ' 00:06:17.102 00:28:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.102 00:28:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.102 00:28:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:17.102 00:28:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.102 00:28:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.102 ************************************ 00:06:17.102 START TEST event_perf 00:06:17.102 ************************************ 00:06:17.102 00:28:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.102 Running I/O for 1 seconds...[2024-11-27 00:28:53.738339] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:17.102 [2024-11-27 00:28:53.738455] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70507 ] 00:06:17.360 [2024-11-27 00:28:53.896480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.360 [2024-11-27 00:28:53.919302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.360 [2024-11-27 00:28:53.919429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.360 Running I/O for 1 seconds...[2024-11-27 00:28:53.919708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.360 [2024-11-27 00:28:53.919803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.294 00:06:18.294 lcore 0: 200811 00:06:18.294 lcore 1: 200814 00:06:18.294 lcore 2: 200816 00:06:18.294 lcore 3: 200817 00:06:18.294 done. 
00:06:18.294 ************************************ 00:06:18.294 END TEST event_perf 00:06:18.294 ************************************ 00:06:18.294 00:06:18.294 real 0m1.264s 00:06:18.294 user 0m4.059s 00:06:18.294 sys 0m0.087s 00:06:18.294 00:28:54 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.294 00:28:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.294 00:28:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.294 00:28:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:18.294 00:28:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.294 00:28:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.294 ************************************ 00:06:18.294 START TEST event_reactor 00:06:18.294 ************************************ 00:06:18.294 00:28:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.294 [2024-11-27 00:28:55.043074] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:18.294 [2024-11-27 00:28:55.043187] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:06:18.552 [2024-11-27 00:28:55.200083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.553 [2024-11-27 00:28:55.223843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.487 test_start 00:06:19.487 oneshot 00:06:19.487 tick 100 00:06:19.487 tick 100 00:06:19.487 tick 250 00:06:19.487 tick 100 00:06:19.487 tick 100 00:06:19.488 tick 100 00:06:19.488 tick 250 00:06:19.488 tick 500 00:06:19.488 tick 100 00:06:19.488 tick 100 00:06:19.488 tick 250 00:06:19.488 tick 100 00:06:19.488 tick 100 00:06:19.488 test_end 00:06:19.488 00:06:19.488 real 0m1.254s 00:06:19.488 user 0m1.083s 00:06:19.488 sys 0m0.064s 00:06:19.488 00:28:56 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.488 00:28:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:19.488 ************************************ 00:06:19.488 END TEST event_reactor 00:06:19.488 ************************************ 00:06:19.746 00:28:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.746 00:28:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:19.746 00:28:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.746 00:28:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.746 ************************************ 00:06:19.746 START TEST event_reactor_perf 00:06:19.746 ************************************ 00:06:19.746 00:28:56 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.746 [2024-11-27 00:28:56.342802] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:19.746 [2024-11-27 00:28:56.342914] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70577 ] 00:06:19.746 [2024-11-27 00:28:56.499164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.746 [2024-11-27 00:28:56.522282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.122 test_start 00:06:21.122 test_end 00:06:21.122 Performance: 312497 events per second 00:06:21.122 00:06:21.122 real 0m1.251s 00:06:21.122 user 0m1.084s 00:06:21.122 sys 0m0.060s 00:06:21.122 00:28:57 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.122 00:28:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 ************************************ 00:06:21.122 END TEST event_reactor_perf 00:06:21.122 ************************************ 00:06:21.122 00:28:57 event -- event/event.sh@49 -- # uname -s 00:06:21.122 00:28:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.122 00:28:57 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.122 00:28:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.122 00:28:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.122 00:28:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 ************************************ 00:06:21.122 START TEST event_scheduler 00:06:21.122 ************************************ 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.122 * Looking for test storage... 
00:06:21.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.122 00:28:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.122 --rc genhtml_branch_coverage=1 00:06:21.122 --rc genhtml_function_coverage=1 00:06:21.122 --rc genhtml_legend=1 00:06:21.122 --rc geninfo_all_blocks=1 00:06:21.122 --rc geninfo_unexecuted_blocks=1 00:06:21.122 00:06:21.122 ' 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.122 --rc genhtml_branch_coverage=1 00:06:21.122 --rc genhtml_function_coverage=1 00:06:21.122 --rc genhtml_legend=1 00:06:21.122 --rc geninfo_all_blocks=1 00:06:21.122 --rc geninfo_unexecuted_blocks=1 00:06:21.122 00:06:21.122 ' 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.122 --rc genhtml_branch_coverage=1 00:06:21.122 --rc genhtml_function_coverage=1 00:06:21.122 --rc genhtml_legend=1 00:06:21.122 --rc geninfo_all_blocks=1 00:06:21.122 --rc geninfo_unexecuted_blocks=1 00:06:21.122 00:06:21.122 ' 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.122 --rc genhtml_branch_coverage=1 00:06:21.122 --rc genhtml_function_coverage=1 00:06:21.122 --rc genhtml_legend=1 00:06:21.122 --rc geninfo_all_blocks=1 00:06:21.122 --rc geninfo_unexecuted_blocks=1 00:06:21.122 00:06:21.122 ' 00:06:21.122 00:28:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.122 00:28:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70648 00:06:21.122 00:28:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.122 00:28:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70648 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70648 ']' 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.122 00:28:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 00:28:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.122 [2024-11-27 00:28:57.814951] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:21.122 [2024-11-27 00:28:57.815062] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70648 ] 00:06:21.381 [2024-11-27 00:28:57.972183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.381 [2024-11-27 00:28:57.994760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.381 [2024-11-27 00:28:57.994947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.381 [2024-11-27 00:28:57.994974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.381 [2024-11-27 00:28:57.995036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:21.948 00:28:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.948 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.948 POWER: Cannot set governor of lcore 0 to performance 00:06:21.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.948 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.948 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.948 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.948 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:21.948 POWER: Unable to set Power Management Environment for lcore 0 00:06:21.948 [2024-11-27 00:28:58.624557] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:21.948 [2024-11-27 00:28:58.624587] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:21.948 [2024-11-27 00:28:58.624606] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:21.948 [2024-11-27 00:28:58.624633] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.948 [2024-11-27 00:28:58.624641] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.948 [2024-11-27 00:28:58.624650] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.948 00:28:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.948 [2024-11-27 00:28:58.683070] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
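The bring-up traced here pauses app initialization so the scheduler can be swapped before the reactors start scheduling: the test app is launched with --wait-for-rpc, framework_set_scheduler switches the default static scheduler to dynamic, and framework_start_init resumes startup. The POWER/governor errors are expected in this environment: the DPDK governor cannot open the cpufreq scaling_governor sysfs files inside the VM, so the dynamic scheduler initializes without a power governor and only its load/core/busy limits take effect. A hand-run equivalent, under the same assumptions as the sketch above (binary, flags and socket as in the trace; rpc.py standing for scripts/rpc.py):

  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  rpc.py framework_set_scheduler dynamic   # governor init may fail harmlessly without cpufreq access
  rpc.py framework_start_init              # resume startup; reactors run under the chosen scheduler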
00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.948 00:28:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.948 00:28:58 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.949 00:28:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.949 ************************************ 00:06:21.949 START TEST scheduler_create_thread 00:06:21.949 ************************************ 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.949 2 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.949 3 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.949 4 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.949 5 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.949 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.210 6 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.210 7 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.210 8 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.210 9 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.210 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.211 10 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.211 00:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.593 00:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.593 00:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:23.593 00:29:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:23.593 00:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.593 00:29:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.536 00:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.536 00:06:24.536 real 0m2.612s 00:06:24.536 user 0m0.008s 00:06:24.536 sys 0m0.011s 00:06:24.536 00:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.536 00:29:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.536 ************************************ 00:06:24.536 END TEST scheduler_create_thread 00:06:24.536 ************************************ 00:06:24.796 00:29:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.796 00:29:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70648 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70648 ']' 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70648 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70648 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:24.796 killing process with pid 70648 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70648' 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70648 00:06:24.796 00:29:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70648 00:06:25.053 [2024-11-27 00:29:01.786757] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
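The teardown traced above follows the harness's killprocess pattern: verify the pid is set, probe liveness with kill -0, read the process name with ps so a sudo wrapper is never killed, then kill and wait. A condensed sketch of that sequence (not the exact autotest_common.sh source; note that wait only reaps the pid when it is a child of the calling shell, as it is here):

  killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1      # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1            # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap it so the test exits cleanly
  }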
00:06:25.312 00:06:25.312 real 0m4.339s 00:06:25.312 user 0m7.881s 00:06:25.312 sys 0m0.335s 00:06:25.312 00:29:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.312 00:29:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.312 ************************************ 00:06:25.312 END TEST event_scheduler 00:06:25.312 ************************************ 00:06:25.312 00:29:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:25.312 00:29:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:25.312 00:29:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.312 00:29:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.312 00:29:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.312 ************************************ 00:06:25.312 START TEST app_repeat 00:06:25.312 ************************************ 00:06:25.312 00:29:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70743 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.312 Process app_repeat pid: 70743 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70743' 00:06:25.312 spdk_app_start Round 0 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70743 /var/tmp/spdk-nbd.sock 00:06:25.312 00:29:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70743 ']' 00:06:25.312 00:29:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:25.312 00:29:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.312 00:29:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.312 00:29:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.312 00:29:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.312 00:29:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.312 [2024-11-27 00:29:02.048084] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:25.312 [2024-11-27 00:29:02.048188] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70743 ] 00:06:25.570 [2024-11-27 00:29:02.200208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.570 [2024-11-27 00:29:02.224431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.570 [2024-11-27 00:29:02.224465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.135 00:29:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.135 00:29:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:26.135 00:29:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.393 Malloc0 00:06:26.393 00:29:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.650 Malloc1 00:06:26.650 00:29:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.650 00:29:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.651 00:29:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.651 00:29:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.651 00:29:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.651 00:29:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.651 00:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.651 00:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.651 00:29:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.908 /dev/nbd0 00:06:26.908 00:29:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.908 00:29:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.908 00:29:03 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.908 1+0 records in 00:06:26.908 1+0 records out 00:06:26.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301235 s, 13.6 MB/s 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.908 00:29:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.908 00:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.908 00:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.908 00:29:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.166 /dev/nbd1 00:06:27.166 00:29:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.166 00:29:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.166 1+0 records in 00:06:27.166 1+0 records out 00:06:27.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457152 s, 9.0 MB/s 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.166 00:29:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:27.166 00:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.166 00:29:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.166 00:29:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.166 00:29:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.166 
00:29:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.424 { 00:06:27.424 "nbd_device": "/dev/nbd0", 00:06:27.424 "bdev_name": "Malloc0" 00:06:27.424 }, 00:06:27.424 { 00:06:27.424 "nbd_device": "/dev/nbd1", 00:06:27.424 "bdev_name": "Malloc1" 00:06:27.424 } 00:06:27.424 ]' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.424 { 00:06:27.424 "nbd_device": "/dev/nbd0", 00:06:27.424 "bdev_name": "Malloc0" 00:06:27.424 }, 00:06:27.424 { 00:06:27.424 "nbd_device": "/dev/nbd1", 00:06:27.424 "bdev_name": "Malloc1" 00:06:27.424 } 00:06:27.424 ]' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.424 /dev/nbd1' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.424 /dev/nbd1' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.424 256+0 records in 00:06:27.424 256+0 records out 00:06:27.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067126 s, 156 MB/s 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.424 256+0 records in 00:06:27.424 256+0 records out 00:06:27.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148527 s, 70.6 MB/s 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.424 256+0 records in 00:06:27.424 256+0 records out 00:06:27.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016507 s, 63.5 MB/s 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.424 00:29:04 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.424 00:29:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.683 00:29:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.941 00:29:04 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.941 00:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.199 00:29:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.199 00:29:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.457 00:29:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.457 [2024-11-27 00:29:05.092961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.457 [2024-11-27 00:29:05.113548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.457 [2024-11-27 00:29:05.113646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.457 [2024-11-27 00:29:05.154356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.457 [2024-11-27 00:29:05.154414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.737 spdk_app_start Round 1 00:06:31.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.737 00:29:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.737 00:29:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.737 00:29:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70743 /var/tmp/spdk-nbd.sock 00:06:31.737 00:29:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70743 ']' 00:06:31.737 00:29:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.737 00:29:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.737 00:29:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
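Each app_repeat round performs the same nbd round-trip the trace above shows for Round 0: create two 64 MiB malloc bdevs with 4 KiB blocks, attach them to /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with O_DIRECT, read it back with cmp, then detach. Condensed into the RPC calls visible in the trace (rpc and sock are shortened here for readability; paths are the ones this run uses):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  $rpc -s $sock bdev_malloc_create 64 4096        # -> Malloc0: 64 MiB, 4096-byte blocks
  $rpc -s $sock bdev_malloc_create 64 4096        # -> Malloc1
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256    # 1 MiB of test data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$dev bs=4096 count=256 oflag=direct
    cmp -b -n 1M $tmp $dev                        # byte-for-byte readback check
  done

  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  rm $tmp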
00:06:31.737 00:29:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.737 00:29:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.737 00:29:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.737 00:29:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:31.737 00:29:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.737 Malloc0 00:06:31.737 00:29:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.995 Malloc1 00:06:31.995 00:29:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.995 00:29:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.253 /dev/nbd0 00:06:32.253 00:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.253 00:29:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.253 00:29:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.254 1+0 records in 00:06:32.254 1+0 records out 
00:06:32.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468274 s, 8.7 MB/s 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.254 00:29:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.254 00:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.254 00:29:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.254 00:29:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.254 /dev/nbd1 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.512 1+0 records in 00:06:32.512 1+0 records out 00:06:32.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559042 s, 7.3 MB/s 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.512 00:29:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.512 { 00:06:32.512 "nbd_device": "/dev/nbd0", 00:06:32.512 "bdev_name": "Malloc0" 00:06:32.512 }, 00:06:32.512 { 00:06:32.512 "nbd_device": "/dev/nbd1", 00:06:32.512 "bdev_name": "Malloc1" 00:06:32.512 } 
00:06:32.512 ]' 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.512 { 00:06:32.512 "nbd_device": "/dev/nbd0", 00:06:32.512 "bdev_name": "Malloc0" 00:06:32.512 }, 00:06:32.512 { 00:06:32.512 "nbd_device": "/dev/nbd1", 00:06:32.512 "bdev_name": "Malloc1" 00:06:32.512 } 00:06:32.512 ]' 00:06:32.512 00:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.770 /dev/nbd1' 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.770 /dev/nbd1' 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.770 256+0 records in 00:06:32.770 256+0 records out 00:06:32.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682833 s, 154 MB/s 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.770 256+0 records in 00:06:32.770 256+0 records out 00:06:32.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166965 s, 62.8 MB/s 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.770 256+0 records in 00:06:32.770 256+0 records out 00:06:32.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208178 s, 50.4 MB/s 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.770 00:29:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.770 00:29:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.029 00:29:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.287 00:29:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.287 00:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.287 00:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.287 00:29:10 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.287 00:29:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.287 00:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.287 00:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.287 00:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.545 00:29:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.545 00:29:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.545 00:29:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.545 00:29:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.545 00:29:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.545 00:29:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.545 00:29:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.804 [2024-11-27 00:29:10.398824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.804 [2024-11-27 00:29:10.419449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.804 [2024-11-27 00:29:10.419454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.804 [2024-11-27 00:29:10.460815] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.804 [2024-11-27 00:29:10.460879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.085 spdk_app_start Round 2 00:06:37.085 00:29:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.085 00:29:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.085 00:29:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70743 /var/tmp/spdk-nbd.sock 00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70743 ']' 00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
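Before a round is considered torn down, the helper asks the app for its remaining nbd exports and requires the count to be zero; the '[]' and 'true' entries in the trace above are that check succeeding on an empty table. A sketch of the same count, reusing rpc and sock from the earlier sketch (grep -c exits non-zero when it finds no matches, hence the guard):

  count=$($rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || { echo "nbd devices still attached"; exit 1; }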
00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.085 00:29:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:37.085 00:29:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.085 Malloc0 00:06:37.085 00:29:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.343 Malloc1 00:06:37.343 00:29:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.343 00:29:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.343 /dev/nbd0 00:06:37.601 00:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.601 00:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.601 1+0 records in 00:06:37.601 1+0 records out 
00:06:37.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206916 s, 19.8 MB/s 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.601 00:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.601 00:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.601 00:29:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.601 /dev/nbd1 00:06:37.601 00:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.601 00:29:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.601 00:29:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.601 1+0 records in 00:06:37.601 1+0 records out 00:06:37.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431256 s, 9.5 MB/s 00:06:37.859 00:29:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.859 00:29:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.859 00:29:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.859 00:29:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.859 00:29:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.859 { 00:06:37.859 "nbd_device": "/dev/nbd0", 00:06:37.859 "bdev_name": "Malloc0" 00:06:37.859 }, 00:06:37.859 { 00:06:37.859 "nbd_device": "/dev/nbd1", 00:06:37.859 "bdev_name": "Malloc1" 00:06:37.859 } 
00:06:37.859 ]' 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.859 { 00:06:37.859 "nbd_device": "/dev/nbd0", 00:06:37.859 "bdev_name": "Malloc0" 00:06:37.859 }, 00:06:37.859 { 00:06:37.859 "nbd_device": "/dev/nbd1", 00:06:37.859 "bdev_name": "Malloc1" 00:06:37.859 } 00:06:37.859 ]' 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.859 /dev/nbd1' 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.859 /dev/nbd1' 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.859 00:29:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.117 256+0 records in 00:06:38.117 256+0 records out 00:06:38.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00726974 s, 144 MB/s 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.117 256+0 records in 00:06:38.117 256+0 records out 00:06:38.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148023 s, 70.8 MB/s 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.117 256+0 records in 00:06:38.117 256+0 records out 00:06:38.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165099 s, 63.5 MB/s 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.117 00:29:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.117 00:29:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.376 00:29:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.376 00:29:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.634 00:29:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.634 00:29:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.892 00:29:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.150 [2024-11-27 00:29:15.707367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.150 [2024-11-27 00:29:15.728463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.150 [2024-11-27 00:29:15.728561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.150 [2024-11-27 00:29:15.768606] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.150 [2024-11-27 00:29:15.768657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.433 00:29:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70743 /var/tmp/spdk-nbd.sock 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70743 ']' 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
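The nbd_dd_data_verify and nbd_stop_disks sequences traced above reduce to one reusable pattern: push a known random payload through each /dev/nbd device with O_DIRECT, read it back with cmp, then stop the disks over RPC and poll /proc/partitions until the kernel drops them. A minimal bash sketch of that pattern, not the helpers' exact code; the scratch path, device list, and 20-attempt limit come from the trace, while the 0.1 s poll interval is an assumption:

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# write phase: 1 MiB of random data, copied onto each device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: byte-for-byte comparison of the first 1 MiB
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit on any mismatch
done
rm "$tmp_file"

# teardown: stop each disk over RPC, then wait for the kernel to drop it
for dev in "${nbd_list[@]}"; do
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
  name=$(basename "$dev")
  for ((i = 1; i <= 20; i++)); do        # bounded wait, as in waitfornbd_exit
    grep -q -w "$name" /proc/partitions || break
    sleep 0.1                            # poll interval assumed
  done
done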
00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:42.433 00:29:18 event.app_repeat -- event/event.sh@39 -- # killprocess 70743 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70743 ']' 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70743 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70743 00:06:42.433 killing process with pid 70743 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70743' 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70743 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70743 00:06:42.433 spdk_app_start is called in Round 0. 00:06:42.433 Shutdown signal received, stop current app iteration 00:06:42.433 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 reinitialization... 00:06:42.433 spdk_app_start is called in Round 1. 00:06:42.433 Shutdown signal received, stop current app iteration 00:06:42.433 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 reinitialization... 00:06:42.433 spdk_app_start is called in Round 2. 00:06:42.433 Shutdown signal received, stop current app iteration 00:06:42.433 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 reinitialization... 00:06:42.433 spdk_app_start is called in Round 3. 00:06:42.433 Shutdown signal received, stop current app iteration 00:06:42.433 00:29:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:42.433 00:29:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:42.433 00:06:42.433 real 0m16.975s 00:06:42.433 user 0m37.841s 00:06:42.433 sys 0m2.181s 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.433 ************************************ 00:06:42.433 END TEST app_repeat 00:06:42.433 ************************************ 00:06:42.433 00:29:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.433 00:29:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:42.433 00:29:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:42.433 00:29:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.433 00:29:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.433 00:29:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.433 ************************************ 00:06:42.433 START TEST cpu_locks 00:06:42.433 ************************************ 00:06:42.433 00:29:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:42.433 * Looking for test storage... 
00:06:42.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:42.433 00:29:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.433 00:29:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.433 00:29:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.433 00:29:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.433 00:29:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.434 00:29:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.434 --rc genhtml_branch_coverage=1 00:06:42.434 --rc genhtml_function_coverage=1 00:06:42.434 --rc genhtml_legend=1 00:06:42.434 --rc geninfo_all_blocks=1 00:06:42.434 --rc geninfo_unexecuted_blocks=1 00:06:42.434 00:06:42.434 ' 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.434 --rc genhtml_branch_coverage=1 00:06:42.434 --rc genhtml_function_coverage=1 
00:06:42.434 --rc genhtml_legend=1 00:06:42.434 --rc geninfo_all_blocks=1 00:06:42.434 --rc geninfo_unexecuted_blocks=1 00:06:42.434 00:06:42.434 ' 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.434 --rc genhtml_branch_coverage=1 00:06:42.434 --rc genhtml_function_coverage=1 00:06:42.434 --rc genhtml_legend=1 00:06:42.434 --rc geninfo_all_blocks=1 00:06:42.434 --rc geninfo_unexecuted_blocks=1 00:06:42.434 00:06:42.434 ' 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.434 --rc genhtml_branch_coverage=1 00:06:42.434 --rc genhtml_function_coverage=1 00:06:42.434 --rc genhtml_legend=1 00:06:42.434 --rc geninfo_all_blocks=1 00:06:42.434 --rc geninfo_unexecuted_blocks=1 00:06:42.434 00:06:42.434 ' 00:06:42.434 00:29:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:42.434 00:29:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:42.434 00:29:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:42.434 00:29:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.434 00:29:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.434 ************************************ 00:06:42.434 START TEST default_locks 00:06:42.434 ************************************ 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71168 00:06:42.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71168 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71168 ']' 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.434 00:29:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.693 [2024-11-27 00:29:19.267867] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
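The lcov version gate traced above (lt 1.15 2 via cmp_versions) is a plain field-wise numeric comparison: both version strings are split on '.', '-' and ':' into the ver1/ver2 arrays and the first differing field decides. A condensed sketch of that logic, assuming purely numeric fields:

version_lt() {   # returns 0 when $1 is strictly older than $2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < max; v++)); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # e.g. 1 < 2 for lcov 1.15
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "old lcov: enable the branch/function coverage flags"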
00:06:42.693 [2024-11-27 00:29:19.268112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71168 ] 00:06:42.693 [2024-11-27 00:29:19.421696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.693 [2024-11-27 00:29:19.446015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71168 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71168 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71168 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71168 ']' 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71168 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71168 00:06:43.633 killing process with pid 71168 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71168' 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71168 00:06:43.633 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71168 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71168 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71168 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:43.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
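The locks_exist check traced above is the heart of this suite: spdk_tgt takes a POSIX file lock on /var/tmp/spdk_cpu_lock_NNN for each core it claims, so lslocks filtered by pid is enough to prove the claim is held. A sketch, reusing the pid from the trace:

locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock   # any spdk_cpu_lock_* row proves the claim
}
locks_exist 71168 && echo "pid 71168 holds its core lock(s)"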
00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71168 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71168 ']' 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.893 ERROR: process (pid: 71168) is no longer running 00:06:43.893 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71168) - No such process 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.893 00:06:43.893 real 0m1.377s 00:06:43.893 user 0m1.370s 00:06:43.893 sys 0m0.413s 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.893 ************************************ 00:06:43.893 00:29:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.893 END TEST default_locks 00:06:43.893 ************************************ 00:06:43.893 00:29:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:43.893 00:29:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.893 00:29:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.893 00:29:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.893 ************************************ 00:06:43.893 START TEST default_locks_via_rpc 00:06:43.893 ************************************ 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71210 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71210 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71210 ']' 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.893 00:29:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.155 [2024-11-27 00:29:20.699111] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:44.155 [2024-11-27 00:29:20.699238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71210 ] 00:06:44.155 [2024-11-27 00:29:20.856566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.155 [2024-11-27 00:29:20.880274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71210 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71210 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71210 
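The NOT wrapper exercised in the default_locks run above inverts an exit status so a test can assert that a command fails; the real helper also records the raw status in es for the (( es > 128 )) check. A trimmed sketch:

NOT() {
  if "$@"; then
    return 1   # the command unexpectedly succeeded
  fi
  return 0     # it failed, which is what the caller asserts
}
NOT kill -0 71168 && echo "pid 71168 is gone, as expected"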
00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71210 ']' 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71210 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71210 00:06:45.097 killing process with pid 71210 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71210' 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71210 00:06:45.097 00:29:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71210 00:06:45.357 00:06:45.357 real 0m1.465s 00:06:45.357 user 0m1.450s 00:06:45.357 sys 0m0.456s 00:06:45.357 00:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.357 ************************************ 00:06:45.357 END TEST default_locks_via_rpc 00:06:45.357 ************************************ 00:06:45.357 00:29:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.357 00:29:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.357 00:29:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.357 00:29:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.357 00:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.357 ************************************ 00:06:45.357 START TEST non_locking_app_on_locked_coremask 00:06:45.357 ************************************ 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71262 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71262 /var/tmp/spdk.sock 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71262 ']' 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
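The default_locks_via_rpc run that just finished drives the same claim/release cycle over the RPC socket instead of process flags: framework_disable_cpumask_locks releases the held core locks at runtime and framework_enable_cpumask_locks re-takes them. A sketch against a running target, assuming $tgt_pid holds its pid:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
lslocks -p "$tgt_pid" | grep -c spdk_cpu_lock || true   # prints 0 while disabled
"$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core locks re-acquired"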
00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.357 00:29:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.617 [2024-11-27 00:29:22.212395] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:45.617 [2024-11-27 00:29:22.212510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71262 ] 00:06:45.617 [2024-11-27 00:29:22.368643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.617 [2024-11-27 00:29:22.393112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.557 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.557 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71278 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71278 /var/tmp/spdk2.sock 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71278 ']' 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.558 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.558 [2024-11-27 00:29:23.109309] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:46.558 [2024-11-27 00:29:23.109577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71278 ] 00:06:46.558 [2024-11-27 00:29:23.280111] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
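The non_locking_app_on_locked_coremask setup now running shows the intended escape hatch: a second target can share a claimed core when it is started with --disable-cpumask-locks and its own RPC socket, which is exactly what the "CPU core locks deactivated" notice above signals. Reduced to a sketch, with the wait for the first socket elided:

app=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$app" -m 0x1 &                            # first target claims core 0
# ...wait for /var/tmp/spdk.sock to accept RPCs...
"$app" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # coexists: takes no lock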
00:06:46.558 [2024-11-27 00:29:23.280169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.558 [2024-11-27 00:29:23.328709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.500 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.500 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.500 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71262 00:06:47.500 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71262 00:06:47.500 00:29:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71262 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71262 ']' 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71262 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71262 00:06:47.500 killing process with pid 71262 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71262' 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71262 00:06:47.500 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71262 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71278 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71278 ']' 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71278 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71278 00:06:48.066 killing process with pid 71278 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71278' 00:06:48.066 00:29:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71278 00:06:48.066 00:29:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71278 00:06:48.633 ************************************ 00:06:48.633 END TEST non_locking_app_on_locked_coremask 00:06:48.633 00:06:48.633 real 0m2.988s 00:06:48.633 user 0m3.227s 00:06:48.633 sys 0m0.822s 00:06:48.633 00:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.633 00:29:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.633 ************************************ 00:06:48.633 00:29:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.633 00:29:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.633 00:29:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.633 00:29:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.633 ************************************ 00:06:48.633 START TEST locking_app_on_unlocked_coremask 00:06:48.633 ************************************ 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71336 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71336 /var/tmp/spdk.sock 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71336 ']' 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.633 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.634 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.634 00:29:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.634 [2024-11-27 00:29:25.250234] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:48.634 [2024-11-27 00:29:25.250360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71336 ] 00:06:48.634 [2024-11-27 00:29:25.403940] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
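Every test in this suite ends with the same killprocess teardown seen above: confirm the pid still names an SPDK reactor, signal it, and reap it so stale locks cannot leak into the next test. A condensed sketch of that recurring sequence:

killprocess() {
  local pid=$1 name
  name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for a healthy target
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid"                               # reap it; surfaces the exit status
}
killprocess 71262                           # pid from the trace above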
00:06:48.634 [2024-11-27 00:29:25.403989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.893 [2024-11-27 00:29:25.426457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71352 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71352 /var/tmp/spdk2.sock 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71352 ']' 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.460 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.460 [2024-11-27 00:29:26.146570] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:49.460 [2024-11-27 00:29:26.146689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ] 00:06:49.718 [2024-11-27 00:29:26.307470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.718 [2024-11-27 00:29:26.354292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.284 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.284 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.284 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71352 00:06:50.284 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71352 00:06:50.284 00:29:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71336 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71336 ']' 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71336 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71336 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.594 killing process with pid 71336 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71336' 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71336 00:06:50.594 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71336 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71352 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71352 ']' 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71352 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71352 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.160 killing process with pid 71352 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71352' 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71352 00:06:51.160 00:29:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71352 00:06:51.418 00:06:51.418 real 0m3.031s 00:06:51.418 user 0m3.267s 00:06:51.418 sys 0m0.864s 00:06:51.418 00:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.676 ************************************ 00:06:51.676 END TEST locking_app_on_unlocked_coremask 00:06:51.676 ************************************ 00:06:51.676 00:29:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.676 00:29:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.676 00:29:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.676 00:29:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.676 ************************************ 00:06:51.676 START TEST locking_app_on_locked_coremask 00:06:51.676 ************************************ 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71410 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71410 /var/tmp/spdk.sock 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71410 ']' 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.676 00:29:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.676 [2024-11-27 00:29:28.326148] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
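The locking_app_on_unlocked_coremask run that just completed is the mirror image of the previous test: when the first target starts with --disable-cpumask-locks, core 0 stays unclaimed and a second, lock-taking instance can bind it. As a sketch, with socket readiness waits elided:

app=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$app" -m 0x1 --disable-cpumask-locks &    # leaves core 0 unclaimed
# ...wait for /var/tmp/spdk.sock...
"$app" -m 0x1 -r /var/tmp/spdk2.sock &     # second instance claims core 0 itself
second_pid=$!
# ...wait for /var/tmp/spdk2.sock...
lslocks -p "$second_pid" | grep -q spdk_cpu_lock && echo "second instance holds core 0"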
00:06:51.676 [2024-11-27 00:29:28.326264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71410 ] 00:06:51.934 [2024-11-27 00:29:28.478548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.934 [2024-11-27 00:29:28.500153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71425 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71425 /var/tmp/spdk2.sock 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71425 /var/tmp/spdk2.sock 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71425 /var/tmp/spdk2.sock 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71425 ']' 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.499 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.499 [2024-11-27 00:29:29.180021] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:52.499 [2024-11-27 00:29:29.180131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71425 ] 00:06:52.756 [2024-11-27 00:29:29.340496] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71410 has claimed it. 00:06:52.756 [2024-11-27 00:29:29.340549] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.323 ERROR: process (pid: 71425) is no longer running 00:06:53.323 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71425) - No such process 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71410 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.323 00:29:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71410 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71410 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71410 ']' 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71410 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71410 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.323 killing process with pid 71410 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71410' 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71410 00:06:53.323 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71410 00:06:53.583 00:06:53.583 real 0m2.092s 00:06:53.583 user 0m2.255s 00:06:53.583 sys 0m0.537s 00:06:53.583 00:29:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.583 00:29:30 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:53.583 ************************************ 00:06:53.583 END TEST locking_app_on_locked_coremask 00:06:53.583 ************************************ 00:06:53.842 00:29:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.842 00:29:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.842 00:29:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.842 00:29:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.842 ************************************ 00:06:53.842 START TEST locking_overlapped_coremask 00:06:53.842 ************************************ 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71468 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71468 /var/tmp/spdk.sock 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71468 ']' 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.842 00:29:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.842 [2024-11-27 00:29:30.466845] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
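The "Cannot create lock on core 0" failure traced just before this point is the behavior locking_app_on_locked_coremask pins down: with core 0 already claimed, a second ordinary target must refuse to start rather than share the core. Sketched with the pid from the trace:

app=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$app" -m 0x1 &                        # pid 71410 in the trace; claims core 0
# ...wait for /var/tmp/spdk.sock...
"$app" -m 0x1 -r /var/tmp/spdk2.sock   # must fail: "Cannot create lock on core 0,
                                       #  probably process 71410 has claimed it"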
00:06:53.842 [2024-11-27 00:29:30.466964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71468 ] 00:06:53.842 [2024-11-27 00:29:30.620872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.101 [2024-11-27 00:29:30.645591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.101 [2024-11-27 00:29:30.645911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.101 [2024-11-27 00:29:30.645980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71486 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71486 /var/tmp/spdk2.sock 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71486 /var/tmp/spdk2.sock 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71486 /var/tmp/spdk2.sock 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71486 ']' 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.669 00:29:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.669 [2024-11-27 00:29:31.380774] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:06:54.669 [2024-11-27 00:29:31.380916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71486 ] 00:06:54.928 [2024-11-27 00:29:31.554665] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71468 has claimed it. 00:06:54.928 [2024-11-27 00:29:31.554728] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.494 ERROR: process (pid: 71486) is no longer running 00:06:55.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71486) - No such process 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71468 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71468 ']' 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71468 00:06:55.494 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.495 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.495 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71468 00:06:55.495 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.495 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.495 killing process with pid 71468 00:06:55.495 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71468' 00:06:55.495 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71468 00:06:55.495 00:29:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71468 00:06:55.753 00:06:55.753 real 0m1.947s 00:06:55.753 user 0m5.374s 00:06:55.753 sys 0m0.431s 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 ************************************ 00:06:55.753 END TEST locking_overlapped_coremask 00:06:55.753 ************************************ 00:06:55.753 00:29:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.753 00:29:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.753 00:29:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.753 00:29:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 ************************************ 00:06:55.753 START TEST locking_overlapped_coremask_via_rpc 00:06:55.753 ************************************ 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71528 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71528 /var/tmp/spdk.sock 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71528 ']' 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.753 00:29:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 [2024-11-27 00:29:32.463719] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:55.753 [2024-11-27 00:29:32.463849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71528 ] 00:06:56.012 [2024-11-27 00:29:32.624130] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.012 [2024-11-27 00:29:32.624188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.012 [2024-11-27 00:29:32.651271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.012 [2024-11-27 00:29:32.651447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.012 [2024-11-27 00:29:32.651521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71546 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71546 /var/tmp/spdk2.sock 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71546 ']' 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.578 00:29:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.837 [2024-11-27 00:29:33.392323] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:56.837 [2024-11-27 00:29:33.392451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71546 ] 00:06:56.837 [2024-11-27 00:29:33.565534] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
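Both targets in this variant pass --disable-cpumask-locks (hence the two "CPU core locks deactivated" notices), so startup succeeds even though masks 0x7 and 0x1c overlap on core 2; the locks are only taken later via RPC. A sketch under those flags, paths shortened:

  build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # nothing yet - locking is deferred to the RPC below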
00:06:56.837 [2024-11-27 00:29:33.565586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.837 [2024-11-27 00:29:33.612224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.837 [2024-11-27 00:29:33.612301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.837 [2024-11-27 00:29:33.612230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.778 [2024-11-27 00:29:34.253038] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71528 has claimed it. 
00:06:57.778 request: 00:06:57.778 { 00:06:57.778 "method": "framework_enable_cpumask_locks", 00:06:57.778 "req_id": 1 00:06:57.778 } 00:06:57.778 Got JSON-RPC error response 00:06:57.778 response: 00:06:57.778 { 00:06:57.778 "code": -32603, 00:06:57.778 "message": "Failed to claim CPU core: 2" 00:06:57.778 } 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71528 /var/tmp/spdk.sock 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71528 ']' 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71546 /var/tmp/spdk2.sock 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71546 ']' 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.778 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.779 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
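The request/response pair above is the heart of this variant: framework_enable_cpumask_locks succeeds on the first target, and the same call against the second target returns -32603 because core 2 is already locked. Reproduced by hand (a sketch; rpc_cmd in this harness wraps scripts/rpc.py):

  scripts/rpc.py framework_enable_cpumask_locks                         # target 1: claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> error -32603 "Failed to claim CPU core: 2": spdk_cpu_lock_002 is held by pid 71528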
00:06:57.779 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.779 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.038 00:06:58.038 real 0m2.300s 00:06:58.038 user 0m1.065s 00:06:58.038 sys 0m0.146s 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.038 00:29:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.038 ************************************ 00:06:58.038 END TEST locking_overlapped_coremask_via_rpc 00:06:58.038 ************************************ 00:06:58.038 00:29:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.038 00:29:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71528 ]] 00:06:58.038 00:29:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71528 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71528 ']' 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71528 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71528 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71528' 00:06:58.038 killing process with pid 71528 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71528 00:06:58.038 00:29:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71528 00:06:58.297 00:29:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71546 ]] 00:06:58.297 00:29:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71546 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71546 ']' 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71546 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.297 
00:29:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71546 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:58.297 killing process with pid 71546 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71546' 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71546 00:06:58.297 00:29:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71546 00:06:58.557 00:29:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.557 00:29:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.557 00:29:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71528 ]] 00:06:58.557 00:29:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71528 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71528 ']' 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71528 00:06:58.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71528) - No such process 00:06:58.557 Process with pid 71528 is not found 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71528 is not found' 00:06:58.557 00:29:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71546 ]] 00:06:58.557 00:29:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71546 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71546 ']' 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71546 00:06:58.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71546) - No such process 00:06:58.557 Process with pid 71546 is not found 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71546 is not found' 00:06:58.557 00:29:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.557 00:06:58.557 real 0m16.294s 00:06:58.557 user 0m28.547s 00:06:58.557 sys 0m4.488s 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.557 00:29:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.557 ************************************ 00:06:58.557 END TEST cpu_locks 00:06:58.557 ************************************ 00:06:58.817 ************************************ 00:06:58.817 END TEST event 00:06:58.817 ************************************ 00:06:58.817 00:06:58.817 real 0m41.812s 00:06:58.817 user 1m20.658s 00:06:58.817 sys 0m7.466s 00:06:58.817 00:29:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.817 00:29:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.817 00:29:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.817 00:29:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.817 00:29:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.817 00:29:35 -- common/autotest_common.sh@10 -- # set +x 00:06:58.817 ************************************ 00:06:58.817 START TEST thread 00:06:58.817 ************************************ 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:58.817 * Looking for test storage... 
00:06:58.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.817 00:29:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.817 00:29:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.817 00:29:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.817 00:29:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.817 00:29:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.817 00:29:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.817 00:29:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.817 00:29:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.817 00:29:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.817 00:29:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.817 00:29:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.817 00:29:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:58.817 00:29:35 thread -- scripts/common.sh@345 -- # : 1 00:06:58.817 00:29:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.817 00:29:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.817 00:29:35 thread -- scripts/common.sh@365 -- # decimal 1 00:06:58.817 00:29:35 thread -- scripts/common.sh@353 -- # local d=1 00:06:58.817 00:29:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.817 00:29:35 thread -- scripts/common.sh@355 -- # echo 1 00:06:58.817 00:29:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.817 00:29:35 thread -- scripts/common.sh@366 -- # decimal 2 00:06:58.817 00:29:35 thread -- scripts/common.sh@353 -- # local d=2 00:06:58.817 00:29:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.817 00:29:35 thread -- scripts/common.sh@355 -- # echo 2 00:06:58.817 00:29:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.817 00:29:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.817 00:29:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.817 00:29:35 thread -- scripts/common.sh@368 -- # return 0 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.817 --rc genhtml_branch_coverage=1 00:06:58.817 --rc genhtml_function_coverage=1 00:06:58.817 --rc genhtml_legend=1 00:06:58.817 --rc geninfo_all_blocks=1 00:06:58.817 --rc geninfo_unexecuted_blocks=1 00:06:58.817 00:06:58.817 ' 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.817 --rc genhtml_branch_coverage=1 00:06:58.817 --rc genhtml_function_coverage=1 00:06:58.817 --rc genhtml_legend=1 00:06:58.817 --rc geninfo_all_blocks=1 00:06:58.817 --rc geninfo_unexecuted_blocks=1 00:06:58.817 00:06:58.817 ' 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:58.817 --rc genhtml_branch_coverage=1 00:06:58.817 --rc genhtml_function_coverage=1 00:06:58.817 --rc genhtml_legend=1 00:06:58.817 --rc geninfo_all_blocks=1 00:06:58.817 --rc geninfo_unexecuted_blocks=1 00:06:58.817 00:06:58.817 ' 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.817 --rc genhtml_branch_coverage=1 00:06:58.817 --rc genhtml_function_coverage=1 00:06:58.817 --rc genhtml_legend=1 00:06:58.817 --rc geninfo_all_blocks=1 00:06:58.817 --rc geninfo_unexecuted_blocks=1 00:06:58.817 00:06:58.817 ' 00:06:58.817 00:29:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.817 00:29:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.817 ************************************ 00:06:58.817 START TEST thread_poller_perf 00:06:58.817 ************************************ 00:06:58.817 00:29:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.817 [2024-11-27 00:29:35.588823] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:06:58.817 [2024-11-27 00:29:35.588948] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71673 ] 00:06:59.075 [2024-11-27 00:29:35.746061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.075 [2024-11-27 00:29:35.766546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.075 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:00.457 [2024-11-27T00:29:37.244Z] ====================================== 00:07:00.457 [2024-11-27T00:29:37.244Z] busy:2613596008 (cyc) 00:07:00.457 [2024-11-27T00:29:37.244Z] total_run_count: 293000 00:07:00.457 [2024-11-27T00:29:37.244Z] tsc_hz: 2600000000 (cyc) 00:07:00.457 [2024-11-27T00:29:37.244Z] ====================================== 00:07:00.457 [2024-11-27T00:29:37.244Z] poller_cost: 8920 (cyc), 3430 (nsec) 00:07:00.457 00:07:00.457 real 0m1.259s 00:07:00.457 user 0m1.085s 00:07:00.457 sys 0m0.067s 00:07:00.457 00:29:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.457 00:29:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.457 ************************************ 00:07:00.457 END TEST thread_poller_perf 00:07:00.457 ************************************ 00:07:00.457 00:29:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.457 00:29:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:00.457 00:29:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.457 00:29:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.457 ************************************ 00:07:00.457 START TEST thread_poller_perf 00:07:00.457 ************************************ 00:07:00.457 00:29:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.457 [2024-11-27 00:29:36.890924] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:00.457 [2024-11-27 00:29:36.891029] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71704 ] 00:07:00.457 [2024-11-27 00:29:37.045252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.457 Running 1000 pollers for 1 seconds with 0 microseconds period. 
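poller_cost in the summary above is just busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz (integer-truncated). A sketch of the arithmetic from the first run's counters:

  busy=2613596008; runs=293000; tsc_hz=2600000000
  cyc=$(( busy / runs ))                   # 8920 cycles per poller invocation
  nsec=$(( cyc * 1000000000 / tsc_hz ))    # 3430 ns at 2.6 GHz
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same formula reproduces the 655 cyc / 251 nsec figures of the zero-period run reported next.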
00:07:00.457 [2024-11-27 00:29:37.063335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.401 [2024-11-27T00:29:38.188Z] ====================================== 00:07:01.401 [2024-11-27T00:29:38.188Z] busy:2603134438 (cyc) 00:07:01.401 [2024-11-27T00:29:38.188Z] total_run_count: 3971000 00:07:01.401 [2024-11-27T00:29:38.188Z] tsc_hz: 2600000000 (cyc) 00:07:01.401 [2024-11-27T00:29:38.188Z] ====================================== 00:07:01.401 [2024-11-27T00:29:38.188Z] poller_cost: 655 (cyc), 251 (nsec) 00:07:01.401 00:07:01.401 real 0m1.256s 00:07:01.401 user 0m1.084s 00:07:01.401 sys 0m0.065s 00:07:01.401 00:29:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.401 00:29:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.401 ************************************ 00:07:01.401 END TEST thread_poller_perf 00:07:01.401 ************************************ 00:07:01.401 00:29:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.401 00:07:01.401 real 0m2.749s 00:07:01.401 user 0m2.273s 00:07:01.401 sys 0m0.266s 00:07:01.401 ************************************ 00:07:01.401 END TEST thread 00:07:01.401 ************************************ 00:07:01.401 00:29:38 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.401 00:29:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.663 00:29:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.663 00:29:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.663 00:29:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.663 00:29:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.663 00:29:38 -- common/autotest_common.sh@10 -- # set +x 00:07:01.663 ************************************ 00:07:01.663 START TEST app_cmdline 00:07:01.663 ************************************ 00:07:01.663 00:29:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.663 * Looking for test storage... 
00:07:01.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.663 00:29:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.663 00:29:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.663 00:29:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.663 00:29:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.663 00:29:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.663 00:29:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.664 --rc genhtml_branch_coverage=1 00:07:01.664 --rc genhtml_function_coverage=1 00:07:01.664 --rc genhtml_legend=1 00:07:01.664 --rc geninfo_all_blocks=1 00:07:01.664 --rc geninfo_unexecuted_blocks=1 00:07:01.664 00:07:01.664 ' 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.664 --rc genhtml_branch_coverage=1 00:07:01.664 --rc genhtml_function_coverage=1 00:07:01.664 --rc genhtml_legend=1 00:07:01.664 --rc geninfo_all_blocks=1 00:07:01.664 --rc geninfo_unexecuted_blocks=1 00:07:01.664 00:07:01.664 ' 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.664 --rc genhtml_branch_coverage=1 00:07:01.664 --rc genhtml_function_coverage=1 00:07:01.664 --rc genhtml_legend=1 00:07:01.664 --rc geninfo_all_blocks=1 00:07:01.664 --rc geninfo_unexecuted_blocks=1 00:07:01.664 00:07:01.664 ' 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.664 --rc genhtml_branch_coverage=1 00:07:01.664 --rc genhtml_function_coverage=1 00:07:01.664 --rc genhtml_legend=1 00:07:01.664 --rc geninfo_all_blocks=1 00:07:01.664 --rc geninfo_unexecuted_blocks=1 00:07:01.664 00:07:01.664 ' 00:07:01.664 00:29:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.664 00:29:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71793 00:07:01.664 00:29:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71793 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71793 ']' 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.664 00:29:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.664 00:29:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.664 [2024-11-27 00:29:38.418207] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
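cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, and that allow-list drives every check that follows: the two listed methods answer normally, while anything else gets -32601 "Method not found". A sketch, paths shortened:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version         # allowed: returns the version object shown below
  scripts/rpc.py env_dpdk_get_mem_stats   # not listed: -32601 "Method not found"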
00:07:01.664 [2024-11-27 00:29:38.418318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71793 ] 00:07:01.925 [2024-11-27 00:29:38.576281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.925 [2024-11-27 00:29:38.600767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.499 00:29:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.499 00:29:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:02.499 00:29:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:02.762 { 00:07:02.762 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:07:02.762 "fields": { 00:07:02.762 "major": 25, 00:07:02.762 "minor": 1, 00:07:02.762 "patch": 0, 00:07:02.762 "suffix": "-pre", 00:07:02.762 "commit": "2f2acf4eb" 00:07:02.762 } 00:07:02.762 } 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.762 00:29:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.762 00:29:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.763 00:29:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.763 00:29:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.763 00:29:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:02.763 00:29:39 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.023 request: 00:07:03.023 { 00:07:03.023 "method": "env_dpdk_get_mem_stats", 00:07:03.023 "req_id": 1 00:07:03.023 } 00:07:03.023 Got JSON-RPC error response 00:07:03.023 response: 00:07:03.023 { 00:07:03.023 "code": -32601, 00:07:03.023 "message": "Method not found" 00:07:03.023 } 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.023 00:29:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71793 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71793 ']' 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71793 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71793 00:07:03.023 killing process with pid 71793 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71793' 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@973 -- # kill 71793 00:07:03.023 00:29:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 71793 00:07:03.283 ************************************ 00:07:03.283 END TEST app_cmdline 00:07:03.283 ************************************ 00:07:03.283 00:07:03.283 real 0m1.855s 00:07:03.283 user 0m2.176s 00:07:03.283 sys 0m0.432s 00:07:03.283 00:29:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.283 00:29:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.542 00:29:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:03.542 00:29:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.542 00:29:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.542 00:29:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.542 ************************************ 00:07:03.542 START TEST version 00:07:03.542 ************************************ 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:03.542 * Looking for test storage... 
00:07:03.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.542 00:29:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.542 00:29:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.542 00:29:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.542 00:29:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.542 00:29:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.542 00:29:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.542 00:29:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.542 00:29:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.542 00:29:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.542 00:29:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.542 00:29:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.542 00:29:40 version -- scripts/common.sh@344 -- # case "$op" in 00:07:03.542 00:29:40 version -- scripts/common.sh@345 -- # : 1 00:07:03.542 00:29:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.542 00:29:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.542 00:29:40 version -- scripts/common.sh@365 -- # decimal 1 00:07:03.542 00:29:40 version -- scripts/common.sh@353 -- # local d=1 00:07:03.542 00:29:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.542 00:29:40 version -- scripts/common.sh@355 -- # echo 1 00:07:03.542 00:29:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.542 00:29:40 version -- scripts/common.sh@366 -- # decimal 2 00:07:03.542 00:29:40 version -- scripts/common.sh@353 -- # local d=2 00:07:03.542 00:29:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.542 00:29:40 version -- scripts/common.sh@355 -- # echo 2 00:07:03.542 00:29:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.542 00:29:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.542 00:29:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.542 00:29:40 version -- scripts/common.sh@368 -- # return 0 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.542 --rc genhtml_branch_coverage=1 00:07:03.542 --rc genhtml_function_coverage=1 00:07:03.542 --rc genhtml_legend=1 00:07:03.542 --rc geninfo_all_blocks=1 00:07:03.542 --rc geninfo_unexecuted_blocks=1 00:07:03.542 00:07:03.542 ' 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.542 --rc genhtml_branch_coverage=1 00:07:03.542 --rc genhtml_function_coverage=1 00:07:03.542 --rc genhtml_legend=1 00:07:03.542 --rc geninfo_all_blocks=1 00:07:03.542 --rc geninfo_unexecuted_blocks=1 00:07:03.542 00:07:03.542 ' 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.542 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:03.542 --rc genhtml_branch_coverage=1 00:07:03.542 --rc genhtml_function_coverage=1 00:07:03.542 --rc genhtml_legend=1 00:07:03.542 --rc geninfo_all_blocks=1 00:07:03.542 --rc geninfo_unexecuted_blocks=1 00:07:03.542 00:07:03.542 ' 00:07:03.542 00:29:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.542 --rc genhtml_branch_coverage=1 00:07:03.542 --rc genhtml_function_coverage=1 00:07:03.542 --rc genhtml_legend=1 00:07:03.542 --rc geninfo_all_blocks=1 00:07:03.542 --rc geninfo_unexecuted_blocks=1 00:07:03.542 00:07:03.542 ' 00:07:03.542 00:29:40 version -- app/version.sh@17 -- # get_header_version major 00:07:03.542 00:29:40 version -- app/version.sh@14 -- # cut -f2 00:07:03.542 00:29:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.542 00:29:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.542 00:29:40 version -- app/version.sh@17 -- # major=25 00:07:03.542 00:29:40 version -- app/version.sh@18 -- # get_header_version minor 00:07:03.542 00:29:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.542 00:29:40 version -- app/version.sh@14 -- # cut -f2 00:07:03.542 00:29:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.542 00:29:40 version -- app/version.sh@18 -- # minor=1 00:07:03.542 00:29:40 version -- app/version.sh@19 -- # get_header_version patch 00:07:03.542 00:29:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.542 00:29:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.543 00:29:40 version -- app/version.sh@14 -- # cut -f2 00:07:03.543 00:29:40 version -- app/version.sh@19 -- # patch=0 00:07:03.543 00:29:40 version -- app/version.sh@20 -- # get_header_version suffix 00:07:03.543 00:29:40 version -- app/version.sh@14 -- # cut -f2 00:07:03.543 00:29:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:03.543 00:29:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.543 00:29:40 version -- app/version.sh@20 -- # suffix=-pre 00:07:03.543 00:29:40 version -- app/version.sh@22 -- # version=25.1 00:07:03.543 00:29:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.543 00:29:40 version -- app/version.sh@28 -- # version=25.1rc0 00:07:03.543 00:29:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:03.543 00:29:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.543 00:29:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:03.543 00:29:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:03.543 00:07:03.543 real 0m0.207s 00:07:03.543 user 0m0.140s 00:07:03.543 sys 0m0.092s 00:07:03.543 ************************************ 00:07:03.543 END TEST version 00:07:03.543 ************************************ 00:07:03.543 00:29:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.543 00:29:40 version -- common/autotest_common.sh@10 -- # set +x 00:07:03.801 00:29:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:03.801 00:29:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:03.801 00:29:40 -- spdk/autotest.sh@194 -- # uname -s 00:07:03.801 00:29:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:03.801 00:29:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.801 00:29:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:03.801 00:29:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:03.801 00:29:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:03.801 00:29:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.801 00:29:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.801 00:29:40 -- common/autotest_common.sh@10 -- # set +x 00:07:03.801 ************************************ 00:07:03.801 START TEST blockdev_nvme 00:07:03.801 ************************************ 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:03.801 * Looking for test storage... 00:07:03.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.801 00:29:40 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.801 --rc genhtml_branch_coverage=1 00:07:03.801 --rc genhtml_function_coverage=1 00:07:03.801 --rc genhtml_legend=1 00:07:03.801 --rc geninfo_all_blocks=1 00:07:03.801 --rc geninfo_unexecuted_blocks=1 00:07:03.801 00:07:03.801 ' 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.801 --rc genhtml_branch_coverage=1 00:07:03.801 --rc genhtml_function_coverage=1 00:07:03.801 --rc genhtml_legend=1 00:07:03.801 --rc geninfo_all_blocks=1 00:07:03.801 --rc geninfo_unexecuted_blocks=1 00:07:03.801 00:07:03.801 ' 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.801 --rc genhtml_branch_coverage=1 00:07:03.801 --rc genhtml_function_coverage=1 00:07:03.801 --rc genhtml_legend=1 00:07:03.801 --rc geninfo_all_blocks=1 00:07:03.801 --rc geninfo_unexecuted_blocks=1 00:07:03.801 00:07:03.801 ' 00:07:03.801 00:29:40 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.801 --rc genhtml_branch_coverage=1 00:07:03.801 --rc genhtml_function_coverage=1 00:07:03.801 --rc genhtml_legend=1 00:07:03.801 --rc geninfo_all_blocks=1 00:07:03.801 --rc geninfo_unexecuted_blocks=1 00:07:03.801 00:07:03.801 ' 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.801 00:29:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:03.801 00:29:40 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71954 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71954 00:07:03.802 00:29:40 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71954 ']' 00:07:03.802 00:29:40 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.802 00:29:40 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.802 00:29:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:03.802 00:29:40 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.802 00:29:40 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.802 00:29:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.802 [2024-11-27 00:29:40.565956] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
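The lt/cmp_versions walk traced above (scripts/common.sh) is a plain field-wise version compare: both strings are split on '.', '-' and ':', the shorter array is padded with zeros, and the fields are compared numerically until one side wins; lcov 1.15 sorts below 2, so the extra --rc coverage flags get exported. A condensed re-implementation of the same idea, assuming purely numeric fields as in the lcov case:

  # Sketch: succeed (return 0) when version $1 < version $2,
  # mirroring the cmp_versions walk in the trace above.
  version_lt() {
      local IFS=.-:
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < n; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'old lcov: enable branch/function coverage flags'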
00:07:03.802 [2024-11-27 00:29:40.566069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71954 ] 00:07:04.060 [2024-11-27 00:29:40.728969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.060 [2024-11-27 00:29:40.752820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.628 00:29:41 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.628 00:29:41 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:04.628 00:29:41 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:04.628 00:29:41 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:04.628 00:29:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:04.628 00:29:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:04.628 00:29:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:04.887 00:29:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:04.887 00:29:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.887 00:29:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.148 00:29:41 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.148 00:29:41 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.148 00:29:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:05.149 00:29:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c9608c90-4981-4884-b98e-1ef47a131ee5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c9608c90-4981-4884-b98e-1ef47a131ee5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "dc20e5e7-2de6-4352-848a-d8503b8be96b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dc20e5e7-2de6-4352-848a-d8503b8be96b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4b447ed6-7406-47a3-b0c5-7ab984c0924f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b447ed6-7406-47a3-b0c5-7ab984c0924f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3a5a1d33-09c2-4868-84cc-8496823e0737"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3a5a1d33-09c2-4868-84cc-8496823e0737",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d4004ab4-6ff6-4e17-91a9-15c970b7833c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d4004ab4-6ff6-4e17-91a9-15c970b7833c",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c46f832d-dd35-4d54-b4e9-bbd73fc75f24"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c46f832d-dd35-4d54-b4e9-bbd73fc75f24",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:05.149 00:29:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:05.149 00:29:41 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:05.149 00:29:41 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:05.149 00:29:41 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:05.149 00:29:41 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71954 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71954 ']' 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71954 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:05.149 00:29:41 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71954 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.149 killing process with pid 71954 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71954' 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71954 00:07:05.149 00:29:41 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71954 00:07:05.716 00:29:42 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.716 00:29:42 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.716 00:29:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:05.716 00:29:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.716 00:29:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.716 ************************************ 00:07:05.716 START TEST bdev_hello_world 00:07:05.716 ************************************ 00:07:05.716 00:29:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.716 [2024-11-27 00:29:42.296821] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:05.716 [2024-11-27 00:29:42.296954] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72016 ] 00:07:05.716 [2024-11-27 00:29:42.452382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.716 [2024-11-27 00:29:42.475515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.285 [2024-11-27 00:29:42.859062] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:06.285 [2024-11-27 00:29:42.859113] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:06.285 [2024-11-27 00:29:42.859134] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:06.285 [2024-11-27 00:29:42.861345] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:06.285 [2024-11-27 00:29:42.861897] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:06.285 [2024-11-27 00:29:42.861926] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:06.285 [2024-11-27 00:29:42.862180] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
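The hello_bdev example above opens Nvme0n1 from the JSON config, writes a buffer, reads it back, and stops the app once "Hello World!" round-trips. The same binary can be pointed at any attached namespace by hand; a minimal sketch, assuming an SPDK build tree with the NVMe devices bound to a userspace driver (gen_nvme.sh emits a bdev-subsystem config like the one loaded earlier):

  scripts/gen_nvme.sh > /tmp/bdev.json                         # generate attach config
  build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1   # -b picks the bdev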
00:07:06.285 00:07:06.285 [2024-11-27 00:29:42.862203] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:06.285 00:07:06.285 real 0m0.802s 00:07:06.285 user 0m0.525s 00:07:06.285 sys 0m0.175s 00:07:06.285 00:29:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.285 00:29:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:06.285 ************************************ 00:07:06.285 END TEST bdev_hello_world 00:07:06.285 ************************************ 00:07:06.544 00:29:43 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:06.544 00:29:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.544 00:29:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.544 00:29:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.544 ************************************ 00:07:06.544 START TEST bdev_bounds 00:07:06.544 ************************************ 00:07:06.544 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:06.544 00:29:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72047 00:07:06.544 00:29:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.544 Process bdevio pid: 72047 00:07:06.544 00:29:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72047' 00:07:06.544 00:29:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72047 00:07:06.545 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72047 ']' 00:07:06.545 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.545 00:29:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:06.545 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.545 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.545 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.545 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:06.545 [2024-11-27 00:29:43.155719] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
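bdevio is launched here with -w, so it starts up, registers the six Nvme* bdevs from the config, and then blocks until told to run; tests.py fires the suites over the RPC socket afterwards. Reduced to its two moving parts, the flow is (a sketch, paths assumed relative to the SPDK repo root):

  test/bdev/bdevio/bdevio -w -s 0 --json /tmp/bdev.json &   # -w: wait for the RPC trigger
  bdevio_pid=$!
  test/bdev/bdevio/tests.py perform_tests                   # kick the suites via RPC
  wait "$bdevio_pid"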
00:07:06.545 [2024-11-27 00:29:43.155847] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72047 ] 00:07:06.545 [2024-11-27 00:29:43.315130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.804 [2024-11-27 00:29:43.343477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.804 [2024-11-27 00:29:43.343874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.804 [2024-11-27 00:29:43.343911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.376 00:29:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.376 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:07.376 00:29:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:07.376 I/O targets: 00:07:07.376 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:07.376 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:07.376 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.376 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.376 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.376 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:07.376 00:07:07.376 00:07:07.376 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.376 http://cunit.sourceforge.net/ 00:07:07.376 00:07:07.376 00:07:07.376 Suite: bdevio tests on: Nvme3n1 00:07:07.376 Test: blockdev write read block ...passed 00:07:07.376 Test: blockdev write zeroes read block ...passed 00:07:07.376 Test: blockdev write zeroes read no split ...passed 00:07:07.376 Test: blockdev write zeroes read split ...passed 00:07:07.376 Test: blockdev write zeroes read split partial ...passed 00:07:07.376 Test: blockdev reset ...[2024-11-27 00:29:44.099282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:07.376 [2024-11-27 00:29:44.101239] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
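Every suite begins with a "blockdev reset"; the disconnect/complete notice pair above is that reset finishing on 0000:00:13.0. The same controller-reset path can also be driven by hand against a running target; a sketch, assuming a target that still has the controllers from this config attached (Nvme3 is the name given at attach time):

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme3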
00:07:07.376 passed 00:07:07.376 Test: blockdev write read 8 blocks ...passed 00:07:07.376 Test: blockdev write read size > 128k ...passed 00:07:07.376 Test: blockdev write read invalid size ...passed 00:07:07.376 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.376 Test: blockdev write read max offset ...passed 00:07:07.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.376 Test: blockdev writev readv 8 blocks ...passed 00:07:07.376 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.376 Test: blockdev writev readv block ...passed 00:07:07.376 Test: blockdev writev readv size > 128k ...passed 00:07:07.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.376 Test: blockdev comparev and writev ...[2024-11-27 00:29:44.109876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca60e000 len:0x1000 00:07:07.376 [2024-11-27 00:29:44.109951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.376 passed 00:07:07.376 Test: blockdev nvme passthru rw ...passed 00:07:07.376 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.376 Test: blockdev nvme admin passthru ...[2024-11-27 00:29:44.111384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.376 [2024-11-27 00:29:44.111415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.376 passed 00:07:07.376 Test: blockdev copy ...passed 00:07:07.376 Suite: bdevio tests on: Nvme2n3 00:07:07.376 Test: blockdev write read block ...passed 00:07:07.376 Test: blockdev write zeroes read block ...passed 00:07:07.376 Test: blockdev write zeroes read no split ...passed 00:07:07.376 Test: blockdev write zeroes read split ...passed 00:07:07.376 Test: blockdev write zeroes read split partial ...passed 00:07:07.376 Test: blockdev reset ...[2024-11-27 00:29:44.137699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:07.376 passed 00:07:07.376 Test: blockdev write read 8 blocks ...[2024-11-27 00:29:44.141334] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
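The COMPARE FAILURE (02/85) completions printed inside the "comparev and writev" tests are a deliberately provoked miscompare, not a defect: note that every such test above still ends in passed. The pair in parentheses is NVMe Status Code Type / Status Code, and those values decode per the NVMe base specification; a quick check of that decoding:

  # Decode the "(SCT/SC)" pair from a completion line, per the NVMe base spec:
  status="(02/85)"
  sct=${status:1:2}; sc=${status:4:2}
  [[ $sct == 02 && $sc == 85 ]] \
      && echo "Media and Data Integrity Errors / Compare Failure"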
00:07:07.376 passed 00:07:07.376 Test: blockdev write read size > 128k ...passed 00:07:07.376 Test: blockdev write read invalid size ...passed 00:07:07.376 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.376 Test: blockdev write read max offset ...passed 00:07:07.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.376 Test: blockdev writev readv 8 blocks ...passed 00:07:07.376 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.376 Test: blockdev writev readv block ...passed 00:07:07.376 Test: blockdev writev readv size > 128k ...passed 00:07:07.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.376 Test: blockdev comparev and writev ...[2024-11-27 00:29:44.153118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca606000 len:0x1000 00:07:07.376 [2024-11-27 00:29:44.153317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.376 passed 00:07:07.376 Test: blockdev nvme passthru rw ...passed 00:07:07.376 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.376 Test: blockdev nvme admin passthru ...[2024-11-27 00:29:44.155051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.376 [2024-11-27 00:29:44.155084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.376 passed 00:07:07.376 Test: blockdev copy ...passed 00:07:07.376 Suite: bdevio tests on: Nvme2n2 00:07:07.637 Test: blockdev write read block ...passed 00:07:07.637 Test: blockdev write zeroes read block ...passed 00:07:07.637 Test: blockdev write zeroes read no split ...passed 00:07:07.637 Test: blockdev write zeroes read split ...passed 00:07:07.637 Test: blockdev write zeroes read split partial ...passed 00:07:07.637 Test: blockdev reset ...[2024-11-27 00:29:44.176572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:07.637 [2024-11-27 00:29:44.180236] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:07.637 Test: blockdev write read 8 blocks ...
00:07:07.637 passed 00:07:07.637 Test: blockdev write read size > 128k ...passed 00:07:07.637 Test: blockdev write read invalid size ...passed 00:07:07.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.637 Test: blockdev write read max offset ...passed 00:07:07.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.637 Test: blockdev writev readv 8 blocks ...passed 00:07:07.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.637 Test: blockdev writev readv block ...passed 00:07:07.637 Test: blockdev writev readv size > 128k ...passed 00:07:07.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.637 Test: blockdev comparev and writev ...[2024-11-27 00:29:44.193196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca608000 len:0x1000 00:07:07.637 [2024-11-27 00:29:44.193249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.637 passed 00:07:07.637 Test: blockdev nvme passthru rw ...passed 00:07:07.637 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.637 Test: blockdev nvme admin passthru ...[2024-11-27 00:29:44.195111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.637 [2024-11-27 00:29:44.195141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.637 passed 00:07:07.637 Test: blockdev copy ...passed 00:07:07.637 Suite: bdevio tests on: Nvme2n1 00:07:07.637 Test: blockdev write read block ...passed 00:07:07.637 Test: blockdev write zeroes read block ...passed 00:07:07.637 Test: blockdev write zeroes read no split ...passed 00:07:07.637 Test: blockdev write zeroes read split ...passed 00:07:07.637 Test: blockdev write zeroes read split partial ...passed 00:07:07.637 Test: blockdev reset ...[2024-11-27 00:29:44.213240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:07.637 passed 00:07:07.637 Test: blockdev write read 8 blocks ...[2024-11-27 00:29:44.216154] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:07.637 passed 00:07:07.637 Test: blockdev write read size > 128k ...passed 00:07:07.637 Test: blockdev write read invalid size ...passed 00:07:07.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.637 Test: blockdev write read max offset ...passed 00:07:07.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.637 Test: blockdev writev readv 8 blocks ...passed 00:07:07.637 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.637 Test: blockdev writev readv block ...passed 00:07:07.637 Test: blockdev writev readv size > 128k ...passed 00:07:07.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.637 Test: blockdev comparev and writev ...[2024-11-27 00:29:44.227253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca204000 len:0x1000 00:07:07.637 [2024-11-27 00:29:44.227322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.637 passed 00:07:07.637 Test: blockdev nvme passthru rw ...passed 00:07:07.638 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.638 Test: blockdev nvme admin passthru ...[2024-11-27 00:29:44.228844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.638 [2024-11-27 00:29:44.228883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.638 passed 00:07:07.638 Test: blockdev copy ...passed 00:07:07.638 Suite: bdevio tests on: Nvme1n1 00:07:07.638 Test: blockdev write read block ...passed 00:07:07.638 Test: blockdev write zeroes read block ...passed 00:07:07.638 Test: blockdev write zeroes read no split ...passed 00:07:07.638 Test: blockdev write zeroes read split ...passed 00:07:07.638 Test: blockdev write zeroes read split partial ...passed 00:07:07.638 Test: blockdev reset ...[2024-11-27 00:29:44.250387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:07.638 passed 00:07:07.638 Test: blockdev write read 8 blocks ...[2024-11-27 00:29:44.253289] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:07.638 passed 00:07:07.638 Test: blockdev write read size > 128k ...passed 00:07:07.638 Test: blockdev write read invalid size ...passed 00:07:07.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.638 Test: blockdev write read max offset ...passed 00:07:07.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.638 Test: blockdev writev readv 8 blocks ...passed 00:07:07.638 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.638 Test: blockdev writev readv block ...passed 00:07:07.638 Test: blockdev writev readv size > 128k ...passed 00:07:07.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.638 Test: blockdev comparev and writev ...[2024-11-27 00:29:44.266641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e163d000 len:0x1000 00:07:07.638 [2024-11-27 00:29:44.266694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.638 passed 00:07:07.638 Test: blockdev nvme passthru rw ...passed 00:07:07.638 Test: blockdev nvme passthru vendor specific ...[2024-11-27 00:29:44.268262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.638 [2024-11-27 00:29:44.268471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.638 passed 00:07:07.638 Test: blockdev nvme admin passthru ...passed 00:07:07.638 Test: blockdev copy ...passed 00:07:07.638 Suite: bdevio tests on: Nvme0n1 00:07:07.638 Test: blockdev write read block ...passed 00:07:07.638 Test: blockdev write zeroes read block ...passed 00:07:07.638 Test: blockdev write zeroes read no split ...passed 00:07:07.638 Test: blockdev write zeroes read split ...passed 00:07:07.638 Test: blockdev write zeroes read split partial ...passed 00:07:07.638 Test: blockdev reset ...[2024-11-27 00:29:44.289041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:07.638 passed 00:07:07.638 Test: blockdev write read 8 blocks ...[2024-11-27 00:29:44.291970] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:07.638 passed 00:07:07.638 Test: blockdev write read size > 128k ...passed 00:07:07.638 Test: blockdev write read invalid size ...passed 00:07:07.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.638 Test: blockdev write read max offset ...passed 00:07:07.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.638 Test: blockdev writev readv 8 blocks ...passed 00:07:07.638 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.638 Test: blockdev writev readv block ...passed 00:07:07.638 Test: blockdev writev readv size > 128k ...passed 00:07:07.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.638 Test: blockdev comparev and writev ...passed 00:07:07.638 Test: blockdev nvme passthru rw ...[2024-11-27 00:29:44.303301] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:07.638 separate metadata which is not supported yet.
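The *ERROR* line above is informational rather than a failure: Nvme0n1 was attached with 64 bytes of separate (non-interleaved) metadata per block ("md_size": 64, "md_interleave": false in the bdev_get_bdevs dump earlier), and bdevio skips comparev_and_writev for that layout. Which bdevs carry separate metadata can be listed straight from the RPC; a sketch against the same socket:

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
    | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'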
00:07:07.638 passed 00:07:07.638 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.638 Test: blockdev nvme admin passthru ...[2024-11-27 00:29:44.304359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:07.638 [2024-11-27 00:29:44.304415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:07.638 passed 00:07:07.638 Test: blockdev copy ...passed 00:07:07.638 00:07:07.638 Run Summary: Type Total Ran Passed Failed Inactive 00:07:07.638 suites 6 6 n/a 0 0 00:07:07.638 tests 138 138 138 0 0 00:07:07.638 asserts 893 893 893 0 n/a 00:07:07.638 00:07:07.638 Elapsed time = 0.500 seconds 00:07:07.638 0 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72047 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72047 ']' 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72047 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72047 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72047' 00:07:07.638 killing process with pid 72047 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72047 00:07:07.638 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72047 00:07:07.897 00:29:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:07.897 00:07:07.897 real 0m1.436s 00:07:07.897 user 0m3.580s 00:07:07.898 sys 0m0.295s 00:07:07.898 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.898 ************************************ 00:07:07.898 END TEST bdev_bounds 00:07:07.898 ************************************ 00:07:07.898 00:29:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:07.898 00:29:44 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.898 00:29:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:07.898 00:29:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.898 00:29:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.898 ************************************ 00:07:07.898 START TEST bdev_nbd 00:07:07.898 ************************************ 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72101 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72101 /var/tmp/spdk-nbd.sock 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72101 ']' 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.898 00:29:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.898 [2024-11-27 00:29:44.660124] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
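nbd_function_test starts bdev_svc on its own socket (/var/tmp/spdk-nbd.sock) and then walks each bdev through the same cycle: export it as a /dev/nbdN node, confirm the node shows up, and prove it serves reads with a single-block direct-I/O dd. Condensed, that per-device cycle is (a sketch assuming root and the nbd kernel module loaded):

  rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc nbd_start_disk Nvme0n1 /dev/nbd0                        # export bdev over NBD
  grep -qw nbd0 /proc/partitions                              # node registered
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # one block readable
  rpc nbd_get_disks                                           # JSON map as dumped below
  rpc nbd_stop_disk /dev/nbd0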
00:07:07.898 [2024-11-27 00:29:44.660374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.157 [2024-11-27 00:29:44.821080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.157 [2024-11-27 00:29:44.848312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:08.729 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.990 1+0 records in 
00:07:08.990 1+0 records out 00:07:08.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563938 s, 7.3 MB/s 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:08.990 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.277 00:29:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.277 1+0 records in 00:07:09.277 1+0 records out 00:07:09.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355212 s, 11.5 MB/s 00:07:09.277 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.277 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.277 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.277 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.278 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.278 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.278 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.278 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.539 1+0 records in 00:07:09.539 1+0 records out 00:07:09.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414148 s, 9.9 MB/s 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.539 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.800 1+0 records in 00:07:09.800 1+0 records out 00:07:09.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413218 s, 9.9 MB/s 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.800 00:29:46 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.800 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.061 1+0 records in 00:07:10.061 1+0 records out 00:07:10.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971563 s, 4.2 MB/s 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.061 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.322 1+0 records in 00:07:10.322 1+0 records out 00:07:10.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886887 s, 4.6 MB/s 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.322 00:29:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd0", 00:07:10.583 "bdev_name": "Nvme0n1" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd1", 00:07:10.583 "bdev_name": "Nvme1n1" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd2", 00:07:10.583 "bdev_name": "Nvme2n1" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd3", 00:07:10.583 "bdev_name": "Nvme2n2" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd4", 00:07:10.583 "bdev_name": "Nvme2n3" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd5", 00:07:10.583 "bdev_name": "Nvme3n1" 00:07:10.583 } 00:07:10.583 ]' 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd0", 00:07:10.583 "bdev_name": "Nvme0n1" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd1", 00:07:10.583 "bdev_name": "Nvme1n1" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd2", 00:07:10.583 "bdev_name": "Nvme2n1" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd3", 00:07:10.583 "bdev_name": "Nvme2n2" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd4", 00:07:10.583 "bdev_name": "Nvme2n3" 00:07:10.583 }, 00:07:10.583 { 00:07:10.583 "nbd_device": "/dev/nbd5", 00:07:10.583 "bdev_name": "Nvme3n1" 00:07:10.583 } 00:07:10.583 ]' 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.583 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.844 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.105 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:11.367 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:11.367 00:29:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.367 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.629 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.890 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.890 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.890 00:29:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.890 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.891 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:12.150 /dev/nbd0 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.150 
00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.150 1+0 records in 00:07:12.150 1+0 records out 00:07:12.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425289 s, 9.6 MB/s 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.150 00:29:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:12.408 /dev/nbd1 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.408 1+0 records in 00:07:12.408 1+0 records out 00:07:12.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324681 s, 12.6 MB/s 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.408 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:12.665 /dev/nbd10 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.665 1+0 records in 00:07:12.665 1+0 records out 00:07:12.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495503 s, 8.3 MB/s 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.665 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:12.923 /dev/nbd11 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.923 00:29:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.923 1+0 records in 00:07:12.923 1+0 records out 00:07:12.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505958 s, 8.1 MB/s 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:12.923 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:13.182 /dev/nbd12 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.182 1+0 records in 00:07:13.182 1+0 records out 00:07:13.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799131 s, 5.1 MB/s 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.182 00:29:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:13.440 /dev/nbd13 
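Note: the waitfornbd helper traced above (common/autotest_common.sh@872-893) applies one pattern to every device: poll /proc/partitions until the nbd name registers, then prove the device serves reads with a single direct-I/O dd. A minimal sketch of that pattern, reconstructed from the trace; the retry sleep, the failure return, and the /tmp scratch path (standing in for the repo's test/bdev/nbdtest file) are assumptions, since the log only shows the success path:

    # Wait for an nbd device to appear, then verify it serves one 4 KiB read.
    waitfornbd() {
        local nbd_name=$1 i
        # Poll (bounded) until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls; not visible in the trace
        done
        # O_DIRECT read of one block, as in the traced dd invocation.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1   # assumed timeout path; the log only ever reaches return 0
    }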
00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.440 1+0 records in 00:07:13.440 1+0 records out 00:07:13.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527057 s, 7.8 MB/s 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.440 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd0", 00:07:13.698 "bdev_name": "Nvme0n1" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd1", 00:07:13.698 "bdev_name": "Nvme1n1" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd10", 00:07:13.698 "bdev_name": "Nvme2n1" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd11", 00:07:13.698 "bdev_name": "Nvme2n2" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd12", 00:07:13.698 "bdev_name": "Nvme2n3" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd13", 00:07:13.698 "bdev_name": "Nvme3n1" 00:07:13.698 } 00:07:13.698 ]' 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd0", 00:07:13.698 "bdev_name": "Nvme0n1" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd1", 00:07:13.698 "bdev_name": "Nvme1n1" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd10", 00:07:13.698 "bdev_name": "Nvme2n1" 
00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd11", 00:07:13.698 "bdev_name": "Nvme2n2" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd12", 00:07:13.698 "bdev_name": "Nvme2n3" 00:07:13.698 }, 00:07:13.698 { 00:07:13.698 "nbd_device": "/dev/nbd13", 00:07:13.698 "bdev_name": "Nvme3n1" 00:07:13.698 } 00:07:13.698 ]' 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.698 /dev/nbd1 00:07:13.698 /dev/nbd10 00:07:13.698 /dev/nbd11 00:07:13.698 /dev/nbd12 00:07:13.698 /dev/nbd13' 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.698 /dev/nbd1 00:07:13.698 /dev/nbd10 00:07:13.698 /dev/nbd11 00:07:13.698 /dev/nbd12 00:07:13.698 /dev/nbd13' 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:13.698 256+0 records in 00:07:13.698 256+0 records out 00:07:13.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752306 s, 139 MB/s 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.698 256+0 records in 00:07:13.698 256+0 records out 00:07:13.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0587767 s, 17.8 MB/s 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.698 256+0 records in 00:07:13.698 256+0 records out 00:07:13.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.057756 s, 18.2 MB/s 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.698 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:13.955 256+0 records in 00:07:13.955 256+0 records out 00:07:13.955 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104269 s, 10.1 MB/s 00:07:13.955 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.955 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:13.955 256+0 records in 00:07:13.955 256+0 records out 00:07:13.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0557643 s, 18.8 MB/s 00:07:13.955 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.955 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:13.955 256+0 records in 00:07:13.955 256+0 records out 00:07:13.956 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0746205 s, 14.1 MB/s 00:07:13.956 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.956 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:14.214 256+0 records in 00:07:14.214 256+0 records out 00:07:14.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0901929 s, 11.6 MB/s 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.214 00:29:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.214 00:29:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.472 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.730 
00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.730 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.987 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.988 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.246 00:29:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.504 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:15.761 malloc_lvol_verify 00:07:15.761 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:16.020 0719dc29-fb2d-4ebc-ab54-5efe2c9fb882 00:07:16.020 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:16.279 59407385-674e-47ac-9ed4-701e77ccad32 00:07:16.279 00:29:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:16.536 /dev/nbd0 00:07:16.536 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:16.536 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:16.536 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:16.537 mke2fs 1.47.0 (5-Feb-2023) 00:07:16.537 Discarding device blocks: 0/4096 done 00:07:16.537 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:16.537 00:07:16.537 Allocating group tables: 0/1 done 00:07:16.537 Writing inode tables: 0/1 done 00:07:16.537 Creating journal (1024 blocks): done 00:07:16.537 Writing superblocks and filesystem accounting information: 0/1 done 00:07:16.537 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
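Note: the nbd_with_lvol_verify sequence traced above reduces to a short RPC series against the spdk-nbd socket followed by a plain mkfs. A condensed sketch of that flow, with commands and sizes copied from the traced arguments; the exit-on-error line is an addition, and the capacity check mirrors the wait_for_nbd_set_capacity test visible in the trace (the traced value 8192 is the 4 MB lvol expressed in 512 B sectors):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol -> lvs/lvol
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    # mkfs only makes sense once the kernel reports a non-zero capacity.
    [ -e /sys/block/nbd0/size ] && [ "$(cat /sys/block/nbd0/size)" -ne 0 ] || exit 1
    mkfs.ext4 /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd0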
00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.537 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.797 00:29:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72101 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72101 ']' 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72101 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72101 00:07:16.798 killing process with pid 72101 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72101' 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72101 00:07:16.798 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72101 00:07:17.059 00:29:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:17.059 00:07:17.059 real 0m9.018s 00:07:17.059 user 0m13.254s 00:07:17.059 sys 0m3.007s 00:07:17.059 ************************************ 00:07:17.059 END TEST bdev_nbd 00:07:17.059 ************************************ 00:07:17.059 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.059 00:29:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 00:29:53 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:17.059 00:29:53 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:17.059 skipping fio tests on NVMe due to multi-ns failures. 00:07:17.059 00:29:53 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
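Note: from here the suite moves from nbd plumbing to bdevperf runs. The three stages that follow drive the same binary with the same bdev.json; only the I/O size, workload, and duration differ. The traced invocations, laid out for comparison (arguments exactly as logged; the empty trailing argument run_test forwards is dropped):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # bdev_verify: 4 KiB verified I/O, queue depth 128, 5 s, cores 0-1 (-m 0x3)
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
    # bdev_verify_big_io: identical shape, 64 KiB I/O
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
    # bdev_write_zeroes: 4 KiB write_zeroes, 1 s, default core mask
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w write_zeroes -t 1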
00:07:17.059 00:29:53 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:17.059 00:29:53 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:17.059 00:29:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:17.059 00:29:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.059 00:29:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.059 ************************************ 00:07:17.059 START TEST bdev_verify 00:07:17.059 ************************************ 00:07:17.059 00:29:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:17.059 [2024-11-27 00:29:53.734361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:17.059 [2024-11-27 00:29:53.734498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72467 ] 00:07:17.318 [2024-11-27 00:29:53.890214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.318 [2024-11-27 00:29:53.927391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.318 [2024-11-27 00:29:53.927435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.577 Running I/O for 5 seconds... 00:07:19.901 22336.00 IOPS, 87.25 MiB/s [2024-11-27T00:29:57.682Z] 22144.00 IOPS, 86.50 MiB/s [2024-11-27T00:29:58.621Z] 21589.33 IOPS, 84.33 MiB/s [2024-11-27T00:29:59.563Z] 21312.00 IOPS, 83.25 MiB/s [2024-11-27T00:29:59.563Z] 21068.80 IOPS, 82.30 MiB/s 00:07:22.776 Latency(us) 00:07:22.776 [2024-11-27T00:29:59.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.776 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.776 Verification LBA range: start 0x0 length 0xbd0bd 00:07:22.776 Nvme0n1 : 5.05 1748.89 6.83 0.00 0.00 72960.72 11796.48 76223.41 00:07:22.776 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.776 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:22.776 Nvme0n1 : 5.06 1732.98 6.77 0.00 0.00 73504.72 9326.28 77030.01 00:07:22.776 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.776 Verification LBA range: start 0x0 length 0xa0000 00:07:22.776 Nvme1n1 : 5.05 1748.40 6.83 0.00 0.00 72902.94 14417.92 70577.23 00:07:22.776 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.776 Verification LBA range: start 0xa0000 length 0xa0000 00:07:22.776 Nvme1n1 : 5.08 1740.28 6.80 0.00 0.00 73260.44 11695.66 66544.25 00:07:22.776 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.776 Verification LBA range: start 0x0 length 0x80000 00:07:22.776 Nvme2n1 : 5.05 1747.29 6.83 0.00 0.00 72641.53 16636.06 62511.26 00:07:22.776 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.776 Verification LBA range: start 0x80000 length 0x80000 00:07:22.776 Nvme2n1 : 5.08 1739.76 6.80 0.00 0.00 73093.35 9779.99 59284.87 00:07:22.776 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.776 Verification LBA range: start 0x0 length 0x80000 00:07:22.777 Nvme2n2 : 5.07 1753.65 6.85 0.00 0.00 72252.64 6856.07 64527.75 00:07:22.777 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.777 Verification LBA range: start 0x80000 length 0x80000 00:07:22.777 Nvme2n2 : 5.08 1738.57 6.79 0.00 0.00 72976.69 11594.83 54848.59 00:07:22.777 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.777 Verification LBA range: start 0x0 length 0x80000 00:07:22.777 Nvme2n3 : 5.08 1762.65 6.89 0.00 0.00 71847.71 7410.61 66140.95 00:07:22.777 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.777 Verification LBA range: start 0x80000 length 0x80000 00:07:22.777 Nvme2n3 : 5.08 1738.05 6.79 0.00 0.00 72878.98 11897.30 59688.17 00:07:22.777 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.777 Verification LBA range: start 0x0 length 0x20000 00:07:22.777 Nvme3n1 : 5.09 1761.08 6.88 0.00 0.00 71780.55 10939.47 68560.74 00:07:22.777 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.777 Verification LBA range: start 0x20000 length 0x20000 00:07:22.777 Nvme3n1 : 5.08 1737.49 6.79 0.00 0.00 72765.97 12048.54 61704.66 00:07:22.777 [2024-11-27T00:29:59.564Z] =================================================================================================================== 00:07:22.777 [2024-11-27T00:29:59.564Z] Total : 20949.09 81.83 0.00 0.00 72735.87 6856.07 77030.01 00:07:23.716 00:07:23.716 real 0m6.591s 00:07:23.716 user 0m12.023s 00:07:23.716 sys 0m0.241s 00:07:23.716 00:30:00 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.716 ************************************ 00:07:23.716 END TEST bdev_verify 00:07:23.716 ************************************ 00:07:23.716 00:30:00 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.716 00:30:00 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.716 00:30:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:23.716 00:30:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.716 00:30:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.716 ************************************ 00:07:23.716 START TEST bdev_verify_big_io 00:07:23.716 ************************************ 00:07:23.716 00:30:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.716 [2024-11-27 00:30:00.402781] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:07:23.716 [2024-11-27 00:30:00.402958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72559 ]
00:07:23.976 [2024-11-27 00:30:00.568447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:23.976 [2024-11-27 00:30:00.611734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:23.976 [2024-11-27 00:30:00.611792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:24.547 Running I/O for 5 seconds...
00:07:29.062 1070.00 IOPS, 66.88 MiB/s
[2024-11-27T00:30:07.225Z] 2392.00 IOPS, 149.50 MiB/s
[2024-11-27T00:30:07.225Z] 2898.00 IOPS, 181.12 MiB/s
00:07:30.438 Latency(us)
00:07:30.438 [2024-11-27T00:30:07.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:30.438 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x0 length 0xbd0b
00:07:30.438 Nvme0n1 : 5.58 124.78 7.80 0.00 0.00 983487.92 34482.02 1438968.91
00:07:30.438 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:30.438 Nvme0n1 : 5.68 131.07 8.19 0.00 0.00 937160.40 22887.19 967916.31
00:07:30.438 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x0 length 0xa000
00:07:30.438 Nvme1n1 : 5.58 127.07 7.94 0.00 0.00 942030.97 51622.20 1258291.20
00:07:30.438 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0xa000 length 0xa000
00:07:30.438 Nvme1n1 : 5.68 135.24 8.45 0.00 0.00 894316.70 63721.16 813049.70
00:07:30.438 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x0 length 0x8000
00:07:30.438 Nvme2n1 : 5.68 131.71 8.23 0.00 0.00 881053.74 71383.83 1490591.11
00:07:30.438 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x8000 length 0x8000
00:07:30.438 Nvme2n1 : 5.68 135.10 8.44 0.00 0.00 867880.43 104051.00 819502.47
00:07:30.438 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x0 length 0x8000
00:07:30.438 Nvme2n2 : 5.81 143.95 9.00 0.00 0.00 783151.82 47992.52 1116330.14
00:07:30.438 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x8000 length 0x8000
00:07:30.438 Nvme2n2 : 5.78 136.45 8.53 0.00 0.00 829043.88 95178.44 816276.09
00:07:30.438 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x0 length 0x8000
00:07:30.438 Nvme2n3 : 5.84 150.47 9.40 0.00 0.00 730531.06 11241.94 1580929.97
00:07:30.438 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x8000 length 0x8000
00:07:30.438 Nvme2n3 : 5.84 149.55 9.35 0.00 0.00 745895.42 25004.50 832408.02
00:07:30.438 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x0 length 0x2000
00:07:30.438 Nvme3n1 : 5.90 203.23 12.70 0.00 0.00 526421.23 346.58 1180857.90
00:07:30.438 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:30.438 Verification LBA range: start 0x2000 length 0x2000
00:07:30.438 Nvme3n1 : 5.84 157.17 9.82 0.00 0.00 689334.11 598.65 858219.13
[2024-11-27T00:30:07.225Z] ===================================================================================================================
00:07:30.438 [2024-11-27T00:30:07.225Z] Total : 1725.80 107.86 0.00 0.00 798619.94 346.58 1580929.97
00:07:31.379
00:07:31.379 real 0m7.706s
00:07:31.379 user 0m14.524s
00:07:31.379 sys 0m0.322s
00:07:31.379 00:30:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:31.379 ************************************
00:07:31.379 END TEST bdev_verify_big_io
00:07:31.379 ************************************
00:07:31.379 00:30:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:31.379 00:30:08 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:31.379 00:30:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:31.379 00:30:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:31.379 00:30:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:31.379 ************************************
00:07:31.379 START TEST bdev_write_zeroes
00:07:31.379 ************************************
00:07:31.379 00:30:08 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:31.379 [2024-11-27 00:30:08.147626] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization...
00:07:31.379 [2024-11-27 00:30:08.147727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72659 ]
00:07:31.636 [2024-11-27 00:30:08.296607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.636 [2024-11-27 00:30:08.319122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.207 Running I/O for 1 seconds...
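(Note: the run_test lines above trace the exact bdevperf invocation used by this test; outside the harness, the same run can be reproduced roughly as in the sketch below, assuming this job's repo layout and the generated bdev.json. The -q flag sets the queue depth, -o the I/O size in bytes, -w the workload, and -t the runtime in seconds.)

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1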
00:07:33.144 61376.00 IOPS, 239.75 MiB/s
00:07:33.144 Latency(us)
00:07:33.144 [2024-11-27T00:30:09.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:33.144 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:33.144 Nvme0n1 : 1.02 10216.00 39.91 0.00 0.00 12504.68 6024.27 20669.05
00:07:33.144 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:33.144 Nvme1n1 : 1.02 10204.38 39.86 0.00 0.00 12501.29 9124.63 20265.75
00:07:33.144 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:33.144 Nvme2n1 : 1.02 10192.89 39.82 0.00 0.00 12464.78 7158.55 19660.80
00:07:33.144 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:33.144 Nvme2n2 : 1.02 10181.33 39.77 0.00 0.00 12460.08 8469.27 19257.50
00:07:33.144 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:33.144 Nvme2n3 : 1.03 10169.90 39.73 0.00 0.00 12441.39 7158.55 19358.33
00:07:33.144 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:33.144 Nvme3n1 : 1.03 10096.16 39.44 0.00 0.00 12516.48 8267.62 21778.12
[2024-11-27T00:30:09.931Z] ===================================================================================================================
00:07:33.144 [2024-11-27T00:30:09.931Z] Total : 61060.66 238.52 0.00 0.00 12481.42 6024.27 21778.12
00:07:33.405
00:07:33.405 real 0m1.860s
00:07:33.405 user 0m1.572s
00:07:33.405 sys 0m0.176s
00:07:33.405 00:30:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:33.405 00:30:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:33.405 ************************************
00:07:33.405 END TEST bdev_write_zeroes
00:07:33.405 ************************************
00:07:33.405 00:30:09 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:33.405 00:30:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:33.405 00:30:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:33.405 00:30:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:33.405 ************************************
00:07:33.405 START TEST bdev_json_nonenclosed
00:07:33.405 ************************************
00:07:33.405 00:30:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:33.405 [2024-11-27 00:30:10.063571] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization...
00:07:33.405 [2024-11-27 00:30:10.063708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72701 ] 00:07:33.667 [2024-11-27 00:30:10.227877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.667 [2024-11-27 00:30:10.252834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.667 [2024-11-27 00:30:10.252938] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:33.667 [2024-11-27 00:30:10.252954] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:33.667 [2024-11-27 00:30:10.252966] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.667 00:07:33.667 real 0m0.329s 00:07:33.667 user 0m0.122s 00:07:33.667 sys 0m0.103s 00:07:33.667 00:30:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.667 ************************************ 00:07:33.667 END TEST bdev_json_nonenclosed 00:07:33.667 ************************************ 00:07:33.667 00:30:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:33.667 00:30:10 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.667 00:30:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:33.667 00:30:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.667 00:30:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.667 ************************************ 00:07:33.667 START TEST bdev_json_nonarray 00:07:33.667 ************************************ 00:07:33.667 00:30:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.667 [2024-11-27 00:30:10.428785] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:33.667 [2024-11-27 00:30:10.428927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72721 ] 00:07:33.927 [2024-11-27 00:30:10.583462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.927 [2024-11-27 00:30:10.608374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.927 [2024-11-27 00:30:10.608478] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
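(Note: the json_config errors above are the expected outcome of these negative tests: nonenclosed.json is valid JSON that is not wrapped in a top-level {} object, and nonarray.json supplies 'subsystems' as something other than an array, so bdevperf shuts down before any RPC server starts listening. For contrast, a minimal well-formed config is sketched below; the attach parameters mirror the gen_nvme.sh output seen later in this log, and the file path is illustrative.)

  cat > /tmp/minimal_bdev.json <<'EOF'
  { "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
      } ]
  } ] }
  EOF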
00:07:33.927 [2024-11-27 00:30:10.608497] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:33.927 [2024-11-27 00:30:10.608510] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.927 00:07:33.927 real 0m0.314s 00:07:33.927 user 0m0.130s 00:07:33.927 sys 0m0.082s 00:07:33.927 00:30:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.927 ************************************ 00:07:33.927 00:30:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:33.927 END TEST bdev_json_nonarray 00:07:33.927 ************************************ 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:34.188 00:30:10 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:34.188 00:07:34.188 real 0m30.380s 00:07:34.188 user 0m47.747s 00:07:34.188 sys 0m5.136s 00:07:34.188 00:30:10 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.188 ************************************ 00:07:34.188 END TEST blockdev_nvme 00:07:34.188 00:30:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.188 ************************************ 00:07:34.188 00:30:10 -- spdk/autotest.sh@209 -- # uname -s 00:07:34.188 00:30:10 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:34.188 00:30:10 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:34.188 00:30:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.188 00:30:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.188 00:30:10 -- common/autotest_common.sh@10 -- # set +x 00:07:34.188 ************************************ 00:07:34.188 START TEST blockdev_nvme_gpt 00:07:34.188 ************************************ 00:07:34.188 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:34.188 * Looking for test storage... 
00:07:34.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:34.188 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:34.188 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:34.188 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.188 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.188 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.189 00:30:10 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:34.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.189 --rc genhtml_branch_coverage=1 00:07:34.189 --rc genhtml_function_coverage=1 00:07:34.189 --rc genhtml_legend=1 00:07:34.189 --rc geninfo_all_blocks=1 00:07:34.189 --rc geninfo_unexecuted_blocks=1 00:07:34.189 00:07:34.189 ' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.189 --rc 
genhtml_branch_coverage=1 00:07:34.189 --rc genhtml_function_coverage=1 00:07:34.189 --rc genhtml_legend=1 00:07:34.189 --rc geninfo_all_blocks=1 00:07:34.189 --rc geninfo_unexecuted_blocks=1 00:07:34.189 00:07:34.189 ' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.189 --rc genhtml_branch_coverage=1 00:07:34.189 --rc genhtml_function_coverage=1 00:07:34.189 --rc genhtml_legend=1 00:07:34.189 --rc geninfo_all_blocks=1 00:07:34.189 --rc geninfo_unexecuted_blocks=1 00:07:34.189 00:07:34.189 ' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.189 --rc genhtml_branch_coverage=1 00:07:34.189 --rc genhtml_function_coverage=1 00:07:34.189 --rc genhtml_legend=1 00:07:34.189 --rc geninfo_all_blocks=1 00:07:34.189 --rc geninfo_unexecuted_blocks=1 00:07:34.189 00:07:34.189 ' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72799 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72799 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72799 ']' 00:07:34.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.189 00:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.449 [2024-11-27 00:30:11.030792] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:34.449 [2024-11-27 00:30:11.030969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72799 ] 00:07:34.449 [2024-11-27 00:30:11.194702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.710 [2024-11-27 00:30:11.236972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.282 00:30:11 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.282 00:30:11 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:35.282 00:30:11 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:35.282 00:30:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:35.282 00:30:11 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:35.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.804 Waiting for block devices as requested 00:07:35.804 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.804 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.804 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:36.064 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:41.344 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:41.344 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.344 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.345 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:41.345 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:41.345 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:41.345 00:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
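(Note: the xtrace run above is the harness's get_zoned_devs filter: it walks /sys/block/nvme*, reads each device's queue/zoned attribute via is_block_zoned, and would exclude any device reporting something other than none; none match here. Distilled to a standalone loop, the check is roughly the sketch below.)

  # Flag any zoned NVMe block devices (a sketch of the is_block_zoned logic traced above).
  for dev in /sys/block/nvme*; do
      [[ -e "$dev/queue/zoned" ]] || continue
      [[ "$(cat "$dev/queue/zoned")" != none ]] && echo "${dev##*/} is zoned"
  done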
00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:41.345 BYT; 00:07:41.345 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:41.345 BYT; 00:07:41.345 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:41.345 00:30:17 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:41.345 00:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:42.278 The operation has completed successfully. 00:07:42.278 00:30:18 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:43.305 The operation has completed successfully. 00:07:43.305 00:30:19 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:43.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:44.128 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.128 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.128 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.128 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.128 00:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:44.128 00:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.128 00:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.128 [] 00:07:44.129 00:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.129 00:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:44.129 00:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:44.129 00:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:44.129 00:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:44.387 00:30:20 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:44.387 00:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.387 00:30:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:44.647 00:30:21 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:44.647 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.647 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:44.648 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fa6d8d6b-aecd-47fe-aa52-b866165ab2a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fa6d8d6b-aecd-47fe-aa52-b866165ab2a4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "51a40f5c-5490-45a0-bee5-790b9032f529"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "51a40f5c-5490-45a0-bee5-790b9032f529",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f1c7c73f-c7cf-4552-9da9-030746490cd3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f1c7c73f-c7cf-4552-9da9-030746490cd3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6c161e54-e12a-4ca3-8593-7b53c5107c37"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c161e54-e12a-4ca3-8593-7b53c5107c37",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c0d1092b-70ec-4423-9951-fc4f465fc893"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c0d1092b-70ec-4423-9951-fc4f465fc893",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:44.648 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:44.648 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:44.648 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:44.648 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:44.648 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72799 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72799 ']' 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72799 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72799 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.648 killing process with pid 72799 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72799' 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72799 00:07:44.648 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72799 00:07:44.907 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:44.907 00:30:21 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:44.907 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:44.907 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.907 00:30:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.907 ************************************ 00:07:44.907 START TEST bdev_hello_world 00:07:44.907 ************************************ 00:07:44.907 00:30:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:45.165 [2024-11-27 
00:30:21.740695] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:45.165 [2024-11-27 00:30:21.740826] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73411 ] 00:07:45.165 [2024-11-27 00:30:21.895353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.165 [2024-11-27 00:30:21.918121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.732 [2024-11-27 00:30:22.296344] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:45.732 [2024-11-27 00:30:22.296392] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:45.732 [2024-11-27 00:30:22.296410] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:45.732 [2024-11-27 00:30:22.298847] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:45.732 [2024-11-27 00:30:22.299353] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:45.732 [2024-11-27 00:30:22.299387] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:45.732 [2024-11-27 00:30:22.299679] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:45.732 00:07:45.732 [2024-11-27 00:30:22.299710] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:45.732 00:07:45.732 real 0m0.788s 00:07:45.732 user 0m0.512s 00:07:45.732 sys 0m0.172s 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:45.732 ************************************ 00:07:45.732 END TEST bdev_hello_world 00:07:45.732 ************************************ 00:07:45.732 00:30:22 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:45.732 00:30:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.732 00:30:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.732 00:30:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.732 ************************************ 00:07:45.732 START TEST bdev_bounds 00:07:45.732 ************************************ 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73442 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:45.732 Process bdevio pid: 73442 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73442' 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73442 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73442 ']' 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.732 00:30:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:45.991 [2024-11-27 00:30:22.570256] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:45.991 [2024-11-27 00:30:22.570388] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73442 ] 00:07:45.991 [2024-11-27 00:30:22.721781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.991 [2024-11-27 00:30:22.748720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.991 [2024-11-27 00:30:22.749022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.991 [2024-11-27 00:30:22.749061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.926 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.926 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:46.926 00:30:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:46.926 I/O targets: 00:07:46.926 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:46.926 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:46.926 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:46.926 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:46.926 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:46.926 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:46.926 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:46.926 00:07:46.926 00:07:46.926 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.926 http://cunit.sourceforge.net/ 00:07:46.926 00:07:46.926 00:07:46.926 Suite: bdevio tests on: Nvme3n1 00:07:46.926 Test: blockdev write read block ...passed 00:07:46.926 Test: blockdev write zeroes read block ...passed 00:07:46.926 Test: blockdev write zeroes read no split ...passed 00:07:46.926 Test: blockdev write zeroes read split ...passed 00:07:46.926 Test: blockdev write zeroes read split partial ...passed 00:07:46.926 Test: blockdev reset ...[2024-11-27 00:30:23.514622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:46.926 [2024-11-27 00:30:23.516263] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
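(Note: the bounds test pairs two processes: bdevio launched with -w so it waits to be driven over RPC, pointed at the same generated bdev.json, and tests.py issuing perform_tests, which yields the CUnit suites in this output. A rough standalone equivalent follows, with the backgrounding and sleep assumed rather than taken from this log.)

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  sleep 1  # assumed: allow the bdevio target to bring up its RPC server
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests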
00:07:46.926 passed
00:07:46.926 Test: blockdev write read 8 blocks ...passed
00:07:46.926 Test: blockdev write read size > 128k ...passed
00:07:46.926 Test: blockdev write read invalid size ...passed
00:07:46.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:46.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:46.926 Test: blockdev write read max offset ...passed
00:07:46.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:46.926 Test: blockdev writev readv 8 blocks ...passed
00:07:46.926 Test: blockdev writev readv 30 x 1block ...passed
00:07:46.926 Test: blockdev writev readv block ...passed
00:07:46.926 Test: blockdev writev readv size > 128k ...passed
00:07:46.926 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:46.926 Test: blockdev comparev and writev ...passed
00:07:46.926 Test: blockdev nvme passthru rw ...[2024-11-27 00:30:23.521846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c260e000 len:0x1000
00:07:46.926 [2024-11-27 00:30:23.521902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:46.926 passed
00:07:46.926 Test: blockdev nvme passthru vendor specific ...passed
00:07:46.926 Test: blockdev nvme admin passthru ...[2024-11-27 00:30:23.522463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:46.926 [2024-11-27 00:30:23.522486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:46.926 passed
00:07:46.926 Test: blockdev copy ...passed
00:07:46.926 Suite: bdevio tests on: Nvme2n3
00:07:46.926 Test: blockdev write read block ...passed
00:07:46.926 Test: blockdev write zeroes read block ...passed
00:07:46.926 Test: blockdev write zeroes read no split ...passed
00:07:46.926 Test: blockdev write zeroes read split ...passed
00:07:46.926 Test: blockdev write zeroes read split partial ...passed
00:07:46.926 Test: blockdev reset ...[2024-11-27 00:30:23.538055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:46.926 passed
00:07:46.926 Test: blockdev write read 8 blocks ...[2024-11-27 00:30:23.540013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:46.926 passed
00:07:46.927 Test: blockdev write read size > 128k ...passed
00:07:46.927 Test: blockdev write read invalid size ...passed
00:07:46.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:46.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:46.927 Test: blockdev write read max offset ...passed
00:07:46.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:46.927 Test: blockdev writev readv 8 blocks ...passed
00:07:46.927 Test: blockdev writev readv 30 x 1block ...passed
00:07:46.927 Test: blockdev writev readv block ...passed
00:07:46.927 Test: blockdev writev readv size > 128k ...passed
00:07:46.927 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:46.927 Test: blockdev comparev and writev ...[2024-11-27 00:30:23.545106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2606000 len:0x1000
00:07:46.927 [2024-11-27 00:30:23.545141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev nvme passthru rw ...passed
00:07:46.927 Test: blockdev nvme passthru vendor specific ...passed
00:07:46.927 Test: blockdev nvme admin passthru ...[2024-11-27 00:30:23.545869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:46.927 [2024-11-27 00:30:23.545893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev copy ...passed
00:07:46.927 Suite: bdevio tests on: Nvme2n2
00:07:46.927 Test: blockdev write read block ...passed
00:07:46.927 Test: blockdev write zeroes read block ...passed
00:07:46.927 Test: blockdev write zeroes read no split ...passed
00:07:46.927 Test: blockdev write zeroes read split ...passed
00:07:46.927 Test: blockdev write zeroes read split partial ...passed
00:07:46.927 Test: blockdev reset ...[2024-11-27 00:30:23.560060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:46.927 passed
00:07:46.927 Test: blockdev write read 8 blocks ...[2024-11-27 00:30:23.561819] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:46.927 passed
00:07:46.927 Test: blockdev write read size > 128k ...passed
00:07:46.927 Test: blockdev write read invalid size ...passed
00:07:46.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:46.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:46.927 Test: blockdev write read max offset ...passed
00:07:46.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:46.927 Test: blockdev writev readv 8 blocks ...passed
00:07:46.927 Test: blockdev writev readv 30 x 1block ...passed
00:07:46.927 Test: blockdev writev readv block ...passed
00:07:46.927 Test: blockdev writev readv size > 128k ...passed
00:07:46.927 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:46.927 Test: blockdev comparev and writev ...passed
00:07:46.927 Test: blockdev nvme passthru rw ...[2024-11-27 00:30:23.566959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2608000 len:0x1000
00:07:46.927 [2024-11-27 00:30:23.566990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev nvme passthru vendor specific ...passed
00:07:46.927 Test: blockdev nvme admin passthru ...[2024-11-27 00:30:23.567686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:46.927 [2024-11-27 00:30:23.567708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev copy ...passed
00:07:46.927 Suite: bdevio tests on: Nvme2n1
00:07:46.927 Test: blockdev write read block ...passed
00:07:46.927 Test: blockdev write zeroes read block ...passed
00:07:46.927 Test: blockdev write zeroes read no split ...passed
00:07:46.927 Test: blockdev write zeroes read split ...passed
00:07:46.927 Test: blockdev write zeroes read split partial ...passed
00:07:46.927 Test: blockdev reset ...[2024-11-27 00:30:23.581283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:46.927 [2024-11-27 00:30:23.583006] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:46.927 passed
00:07:46.927 Test: blockdev write read 8 blocks ...passed
00:07:46.927 Test: blockdev write read size > 128k ...passed
00:07:46.927 Test: blockdev write read invalid size ...passed
00:07:46.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:46.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:46.927 Test: blockdev write read max offset ...passed
00:07:46.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:46.927 Test: blockdev writev readv 8 blocks ...passed
00:07:46.927 Test: blockdev writev readv 30 x 1block ...passed
00:07:46.927 Test: blockdev writev readv block ...passed
00:07:46.927 Test: blockdev writev readv size > 128k ...passed
00:07:46.927 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:46.927 Test: blockdev comparev and writev ...passed
00:07:46.927 Test: blockdev nvme passthru rw ...[2024-11-27 00:30:23.588233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e2e3d000 len:0x1000
00:07:46.927 [2024-11-27 00:30:23.588270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev nvme passthru vendor specific ...[2024-11-27 00:30:23.588725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:46.927 [2024-11-27 00:30:23.588747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev nvme admin passthru ...passed
00:07:46.927 Test: blockdev copy ...passed
00:07:46.927 Suite: bdevio tests on: Nvme1n1p2
00:07:46.927 Test: blockdev write read block ...passed
00:07:46.927 Test: blockdev write zeroes read block ...passed
00:07:46.927 Test: blockdev write zeroes read no split ...passed
00:07:46.927 Test: blockdev write zeroes read split ...passed
00:07:46.927 Test: blockdev write zeroes read split partial ...passed
00:07:46.927 Test: blockdev reset ...[2024-11-27 00:30:23.608445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:46.927 [2024-11-27 00:30:23.610013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:46.927 passed
00:07:46.927 Test: blockdev write read 8 blocks ...passed
00:07:46.927 Test: blockdev write read size > 128k ...passed
00:07:46.927 Test: blockdev write read invalid size ...passed
00:07:46.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:46.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:46.927 Test: blockdev write read max offset ...passed
00:07:46.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:46.927 Test: blockdev writev readv 8 blocks ...passed
00:07:46.927 Test: blockdev writev readv 30 x 1block ...passed
00:07:46.927 Test: blockdev writev readv block ...passed
00:07:46.927 Test: blockdev writev readv size > 128k ...passed
00:07:46.927 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:46.927 Test: blockdev comparev and writev ...[2024-11-27 00:30:23.616623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e2e39000 len:0x1000
00:07:46.927 [2024-11-27 00:30:23.616654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev nvme passthru rw ...passed
00:07:46.927 Test: blockdev nvme passthru vendor specific ...passed
00:07:46.927 Test: blockdev nvme admin passthru ...passed
00:07:46.927 Test: blockdev copy ...passed
00:07:46.927 Suite: bdevio tests on: Nvme1n1p1
00:07:46.927 Test: blockdev write read block ...passed
00:07:46.927 Test: blockdev write zeroes read block ...passed
00:07:46.927 Test: blockdev write zeroes read no split ...passed
00:07:46.927 Test: blockdev write zeroes read split ...passed
00:07:46.927 Test: blockdev write zeroes read split partial ...passed
00:07:46.927 Test: blockdev reset ...[2024-11-27 00:30:23.632783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:46.927 [2024-11-27 00:30:23.634166] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:46.927 passed
00:07:46.927 Test: blockdev write read 8 blocks ...passed
00:07:46.927 Test: blockdev write read size > 128k ...passed
00:07:46.927 Test: blockdev write read invalid size ...passed
00:07:46.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:46.927 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:46.927 Test: blockdev write read max offset ...passed
00:07:46.927 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:46.927 Test: blockdev writev readv 8 blocks ...passed
00:07:46.927 Test: blockdev writev readv 30 x 1block ...passed
00:07:46.927 Test: blockdev writev readv block ...passed
00:07:46.927 Test: blockdev writev readv size > 128k ...passed
00:07:46.927 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:46.927 Test: blockdev comparev and writev ...[2024-11-27 00:30:23.639893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e2e35000 len:0x1000
00:07:46.927 [2024-11-27 00:30:23.639924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:46.927 passed
00:07:46.927 Test: blockdev nvme passthru rw ...passed
00:07:46.927 Test: blockdev nvme passthru vendor specific ...passed
00:07:46.927 Test: blockdev nvme admin passthru ...passed
00:07:46.927 Test: blockdev copy ...passed
00:07:46.927 Suite: bdevio tests on: Nvme0n1
00:07:46.927 Test: blockdev write read block ...passed
00:07:46.927 Test: blockdev write zeroes read block ...passed
00:07:46.927 Test: blockdev write zeroes read no split ...passed
00:07:46.927 Test: blockdev write zeroes read split ...passed
00:07:46.927 Test: blockdev write zeroes read split partial ...passed
00:07:46.927 Test: blockdev reset ...[2024-11-27 00:30:23.651880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:07:46.927 [2024-11-27 00:30:23.653358] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:07:46.927 passed
00:07:46.927 Test: blockdev write read 8 blocks ...passed
00:07:46.927 Test: blockdev write read size > 128k ...passed
00:07:46.927 Test: blockdev write read invalid size ...passed
00:07:46.927 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:46.928 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:46.928 Test: blockdev write read max offset ...passed
00:07:46.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:46.928 Test: blockdev writev readv 8 blocks ...passed
00:07:46.928 Test: blockdev writev readv 30 x 1block ...passed
00:07:46.928 Test: blockdev writev readv block ...passed
00:07:46.928 Test: blockdev writev readv size > 128k ...passed
00:07:46.928 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:46.928 Test: blockdev comparev and writev ...[2024-11-27 00:30:23.657516] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:07:46.928 separate metadata which is not supported yet.
00:07:46.928 passed
00:07:46.928 Test: blockdev nvme passthru rw ...passed
00:07:46.928 Test: blockdev nvme passthru vendor specific ...passed
00:07:46.928 Test: blockdev nvme admin passthru ...[2024-11-27 00:30:23.657981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:07:46.928 [2024-11-27 00:30:23.658010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:07:46.928 passed
00:07:46.928 Test: blockdev copy ...passed
00:07:46.928
00:07:46.928 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.928              suites      7      7    n/a      0        0
00:07:46.928               tests    161    161    161      0        0
00:07:46.928             asserts   1025   1025   1025      0      n/a
00:07:46.928
00:07:46.928 Elapsed time = 0.376 seconds
00:07:46.928 0
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73442
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73442 ']'
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73442
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73442
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:46.928 killing process with pid 73442
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73442'
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73442
00:07:46.928 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73442
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:47.187
00:07:47.187 real 0m1.360s
00:07:47.187 user 0m3.480s
00:07:47.187 sys 0m0.268s
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:47.187 ************************************
00:07:47.187 END TEST bdev_bounds
00:07:47.187 ************************************
00:07:47.187 00:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:47.187 00:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:47.187 00:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:47.187 00:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:47.187 ************************************
00:07:47.187 START TEST bdev_nbd
00:07:47.187 ************************************
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73485
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73485 /var/tmp/spdk-nbd.sock
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73485 ']'
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:47.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:47.187 00:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:07:47.446 [2024-11-27 00:30:23.985962] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization...
00:07:47.446 [2024-11-27 00:30:23.986087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.446 [2024-11-27 00:30:24.139960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.446 [2024-11-27 00:30:24.164270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.380 00:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:48.381 00:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.381 1+0 records in 00:07:48.381 1+0 records out 00:07:48.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336885 s, 12.2 MB/s 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:48.381 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.639 1+0 records in 00:07:48.639 1+0 records out 00:07:48.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432307 s, 9.5 MB/s 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:48.639 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.897 1+0 records in 00:07:48.897 1+0 records out 00:07:48.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340578 s, 12.0 MB/s 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:48.897 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.156 1+0 records in 00:07:49.156 1+0 records out 00:07:49.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387152 s, 10.6 MB/s 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:49.156 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.416 1+0 records in 00:07:49.416 1+0 records out 00:07:49.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349644 s, 11.7 MB/s 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.416 00:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.416 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.416 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.416 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:49.416 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:49.416 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.676 1+0 records in 00:07:49.676 1+0 records out 00:07:49.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628137 s, 6.5 MB/s 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:49.676 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.935 1+0 records in 00:07:49.935 1+0 records out 00:07:49.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522788 s, 7.8 MB/s 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd0", 00:07:49.935 "bdev_name": "Nvme0n1" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd1", 00:07:49.935 "bdev_name": "Nvme1n1p1" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd2", 00:07:49.935 "bdev_name": "Nvme1n1p2" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd3", 00:07:49.935 "bdev_name": "Nvme2n1" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd4", 00:07:49.935 "bdev_name": "Nvme2n2" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd5", 00:07:49.935 "bdev_name": "Nvme2n3" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd6", 00:07:49.935 "bdev_name": "Nvme3n1" 00:07:49.935 } 00:07:49.935 ]' 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd0", 00:07:49.935 "bdev_name": "Nvme0n1" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd1", 00:07:49.935 "bdev_name": "Nvme1n1p1" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd2", 00:07:49.935 "bdev_name": "Nvme1n1p2" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd3", 00:07:49.935 "bdev_name": "Nvme2n1" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd4", 00:07:49.935 "bdev_name": "Nvme2n2" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd5", 00:07:49.935 "bdev_name": "Nvme2n3" 00:07:49.935 }, 00:07:49.935 { 00:07:49.935 "nbd_device": "/dev/nbd6", 00:07:49.935 "bdev_name": "Nvme3n1" 00:07:49.935 } 00:07:49.935 ]' 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.935 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.191 00:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.449 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.707 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.707 00:30:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.965 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.224 00:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.485 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.743 
00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:51.743 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:52.001 /dev/nbd0 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.002 1+0 records in 00:07:52.002 1+0 records out 00:07:52.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470902 s, 8.7 MB/s 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:52.002 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:52.259 /dev/nbd1 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:52.259 00:30:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.259 1+0 records in 00:07:52.259 1+0 records out 00:07:52.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431471 s, 9.5 MB/s 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:52.259 00:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:52.516 /dev/nbd10 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.516 1+0 records in 00:07:52.516 1+0 records out 00:07:52.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322418 s, 12.7 MB/s 00:07:52.516 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.517 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:52.517 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.517 00:30:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:52.517 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:52.517 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.517 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:52.517 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:52.774 /dev/nbd11 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.774 1+0 records in 00:07:52.774 1+0 records out 00:07:52.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365457 s, 11.2 MB/s 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:52.774 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:53.032 /dev/nbd12 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.032 1+0 records in 00:07:53.032 1+0 records out 00:07:53.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467844 s, 8.8 MB/s 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:53.032 /dev/nbd13 00:07:53.032 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.289 1+0 records in 00:07:53.289 1+0 records out 00:07:53.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042297 s, 9.7 MB/s 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:53.289 00:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:53.289 /dev/nbd14 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.289 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.289 1+0 records in 00:07:53.289 1+0 records out 00:07:53.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494269 s, 8.3 MB/s 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.290 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd0", 00:07:53.547 "bdev_name": "Nvme0n1" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd1", 00:07:53.547 "bdev_name": "Nvme1n1p1" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd10", 00:07:53.547 "bdev_name": "Nvme1n1p2" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd11", 00:07:53.547 "bdev_name": "Nvme2n1" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd12", 00:07:53.547 "bdev_name": "Nvme2n2" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd13", 00:07:53.547 "bdev_name": 
"Nvme2n3" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd14", 00:07:53.547 "bdev_name": "Nvme3n1" 00:07:53.547 } 00:07:53.547 ]' 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd0", 00:07:53.547 "bdev_name": "Nvme0n1" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd1", 00:07:53.547 "bdev_name": "Nvme1n1p1" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd10", 00:07:53.547 "bdev_name": "Nvme1n1p2" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd11", 00:07:53.547 "bdev_name": "Nvme2n1" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd12", 00:07:53.547 "bdev_name": "Nvme2n2" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd13", 00:07:53.547 "bdev_name": "Nvme2n3" 00:07:53.547 }, 00:07:53.547 { 00:07:53.547 "nbd_device": "/dev/nbd14", 00:07:53.547 "bdev_name": "Nvme3n1" 00:07:53.547 } 00:07:53.547 ]' 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:53.547 /dev/nbd1 00:07:53.547 /dev/nbd10 00:07:53.547 /dev/nbd11 00:07:53.547 /dev/nbd12 00:07:53.547 /dev/nbd13 00:07:53.547 /dev/nbd14' 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:53.547 /dev/nbd1 00:07:53.547 /dev/nbd10 00:07:53.547 /dev/nbd11 00:07:53.547 /dev/nbd12 00:07:53.547 /dev/nbd13 00:07:53.547 /dev/nbd14' 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:53.547 256+0 records in 00:07:53.547 256+0 records out 00:07:53.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773227 s, 136 MB/s 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:53.547 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:53.805 256+0 records in 00:07:53.805 256+0 records out 00:07:53.805 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0631357 s, 16.6 MB/s 00:07:53.805 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:53.805 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:53.805 256+0 records in 00:07:53.805 256+0 records out 00:07:53.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0633346 s, 16.6 MB/s 00:07:53.805 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:53.805 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:53.805 256+0 records in 00:07:53.805 256+0 records out 00:07:53.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0638852 s, 16.4 MB/s 00:07:53.805 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:53.805 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:53.805 256+0 records in 00:07:53.805 256+0 records out 00:07:53.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600493 s, 17.5 MB/s 00:07:53.805 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:54.063 256+0 records in 00:07:54.063 256+0 records out 00:07:54.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.057696 s, 18.2 MB/s 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:54.063 256+0 records in 00:07:54.063 256+0 records out 00:07:54.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0588231 s, 17.8 MB/s 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:54.063 256+0 records in 00:07:54.063 256+0 records out 00:07:54.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0588811 s, 17.8 MB/s 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # 
for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.063 00:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.321 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.576 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.833 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.091 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.350 00:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.350 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.610 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:55.872 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:56.133 malloc_lvol_verify 00:07:56.133 00:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:56.392 7afa02ce-5e14-4b5b-a04e-7267c4109bed 00:07:56.392 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:56.653 06d67d17-1aff-4b43-b6ec-5c1d8499c25a 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:56.653 /dev/nbd0 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:56.653 mke2fs 1.47.0 (5-Feb-2023) 00:07:56.653 Discarding device blocks: 0/4096 done 00:07:56.653 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:56.653 00:07:56.653 Allocating group tables: 0/1 done 00:07:56.653 Writing inode tables: 0/1 done 00:07:56.653 Creating journal (1024 blocks): done 00:07:56.653 Writing superblocks and filesystem accounting information: 0/1 done 00:07:56.653 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:56.653 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73485 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73485 ']' 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73485 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73485 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.913 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.914 killing process with pid 73485 00:07:56.914 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73485' 00:07:56.914 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73485 00:07:56.914 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73485 00:07:57.178 00:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:57.178 00:07:57.178 real 0m9.931s 00:07:57.178 user 0m14.442s 00:07:57.178 sys 0m3.531s 00:07:57.178 ************************************ 00:07:57.178 END TEST bdev_nbd 00:07:57.178 ************************************ 00:07:57.178 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.178 00:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:57.178 00:30:33 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:57.178 00:30:33 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:57.178 skipping fio tests on NVMe due to multi-ns failures. 00:07:57.178 00:30:33 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:57.178 00:30:33 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:57.178 00:30:33 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:57.178 00:30:33 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:57.178 00:30:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:57.178 00:30:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.178 00:30:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.178 ************************************ 00:07:57.178 START TEST bdev_verify 00:07:57.178 ************************************ 00:07:57.178 00:30:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:57.440 [2024-11-27 00:30:33.979736] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:07:57.440 [2024-11-27 00:30:33.979866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73889 ] 00:07:57.440 [2024-11-27 00:30:34.139695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.440 [2024-11-27 00:30:34.165494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.440 [2024-11-27 00:30:34.165593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.012 Running I/O for 5 seconds... 
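The verify pass starting here is plain bdevperf with the flags shown in the trace; spelled out with their meanings (array form so the comments can sit inline):

args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config to load
    -q 128     # 128 outstanding I/Os per job
    -o 4096    # 4 KiB I/O size
    -w verify  # write a pattern, read it back, compare
    -t 5       # run for 5 seconds
    -C         # let every core in the mask submit I/O to every bdev
    -m 0x3     # core mask: cores 0 and 1
)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"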
00:08:00.335 20608.00 IOPS, 80.50 MiB/s [2024-11-27T00:30:38.061Z] 21504.00 IOPS, 84.00 MiB/s [2024-11-27T00:30:39.002Z] 21482.67 IOPS, 83.92 MiB/s [2024-11-27T00:30:39.941Z] 21728.00 IOPS, 84.88 MiB/s [2024-11-27T00:30:39.941Z] 21222.40 IOPS, 82.90 MiB/s 00:08:03.154 Latency(us) 00:08:03.154 [2024-11-27T00:30:39.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.154 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:03.154 Verification LBA range: start 0x0 length 0xbd0bd 00:08:03.154 Nvme0n1 : 5.09 1507.93 5.89 0.00 0.00 84696.19 18249.26 80256.39 00:08:03.154 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:03.154 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:03.154 Nvme0n1 : 5.09 1484.20 5.80 0.00 0.00 86022.60 18350.08 85902.57 00:08:03.154 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x0 length 0x4ff80 00:08:03.155 Nvme1n1p1 : 5.10 1507.26 5.89 0.00 0.00 84613.84 18955.03 78239.90 00:08:03.155 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:03.155 Nvme1n1p1 : 5.09 1483.12 5.79 0.00 0.00 85802.26 20164.92 81869.59 00:08:03.155 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x0 length 0x4ff7f 00:08:03.155 Nvme1n1p2 : 5.10 1506.02 5.88 0.00 0.00 84516.60 19055.85 75013.51 00:08:03.155 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:03.155 Nvme1n1p2 : 5.09 1482.62 5.79 0.00 0.00 85626.11 21576.47 78239.90 00:08:03.155 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x0 length 0x80000 00:08:03.155 Nvme2n1 : 5.10 1505.05 5.88 0.00 0.00 84407.43 19862.45 70577.23 00:08:03.155 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x80000 length 0x80000 00:08:03.155 Nvme2n1 : 5.10 1481.90 5.79 0.00 0.00 85475.90 23189.66 74206.92 00:08:03.155 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x0 length 0x80000 00:08:03.155 Nvme2n2 : 5.11 1504.12 5.88 0.00 0.00 84290.95 21778.12 74206.92 00:08:03.155 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x80000 length 0x80000 00:08:03.155 Nvme2n2 : 5.10 1481.26 5.79 0.00 0.00 85322.93 18955.03 74206.92 00:08:03.155 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x0 length 0x80000 00:08:03.155 Nvme2n3 : 5.11 1503.73 5.87 0.00 0.00 84156.13 20769.87 77836.60 00:08:03.155 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x80000 length 0x80000 00:08:03.155 Nvme2n3 : 5.10 1480.84 5.78 0.00 0.00 85215.12 16636.06 79853.10 00:08:03.155 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x0 length 0x20000 00:08:03.155 Nvme3n1 : 5.11 1503.32 5.87 0.00 0.00 84010.15 15627.82 81062.99 00:08:03.155 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:03.155 Verification LBA range: start 0x20000 length 0x20000 00:08:03.155 
Nvme3n1 : 5.10 1479.88 5.78 0.00 0.00 85148.69 12401.43 84289.38 00:08:03.155 [2024-11-27T00:30:39.942Z] =================================================================================================================== 00:08:03.155 [2024-11-27T00:30:39.942Z] Total : 20911.24 81.68 0.00 0.00 84945.59 12401.43 85902.57 00:08:03.415 00:08:03.415 real 0m6.131s 00:08:03.415 user 0m11.529s 00:08:03.415 sys 0m0.223s 00:08:03.415 00:30:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.415 ************************************ 00:08:03.415 END TEST bdev_verify 00:08:03.415 ************************************ 00:08:03.415 00:30:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:03.415 00:30:40 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:03.415 00:30:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:03.415 00:30:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.415 00:30:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.415 ************************************ 00:08:03.415 START TEST bdev_verify_big_io 00:08:03.415 ************************************ 00:08:03.415 00:30:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:03.415 [2024-11-27 00:30:40.152234] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:08:03.415 [2024-11-27 00:30:40.152331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73971 ] 00:08:03.674 [2024-11-27 00:30:40.307658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.674 [2024-11-27 00:30:40.333530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.674 [2024-11-27 00:30:40.333567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.241 Running I/O for 5 seconds... 
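The big-I/O pass that just started repeats the same verify workload with 64 KiB blocks instead of 4 KiB; only -o changes between the two invocations, which is why the IOPS figures below are far lower while MiB/s stays in the same range (paths shortened for readability):

bdevperf --json bdev.json -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify: 4 KiB I/Os
bdevperf --json bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io: 64 KiB I/Os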
00:08:08.153 528.00 IOPS, 33.00 MiB/s [2024-11-27T00:30:46.838Z] 1266.50 IOPS, 79.16 MiB/s [2024-11-27T00:30:47.456Z] 1594.00 IOPS, 99.63 MiB/s [2024-11-27T00:30:47.456Z] 2332.00 IOPS, 145.75 MiB/s 00:08:10.669 Latency(us) 00:08:10.669 [2024-11-27T00:30:47.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.669 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x0 length 0xbd0b 00:08:10.669 Nvme0n1 : 5.57 103.40 6.46 0.00 0.00 1179643.45 20669.05 1516402.22 00:08:10.669 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:10.669 Nvme0n1 : 5.89 97.84 6.12 0.00 0.00 1250828.03 12048.54 1529307.77 00:08:10.669 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x0 length 0x4ff8 00:08:10.669 Nvme1n1p1 : 5.89 92.34 5.77 0.00 0.00 1256877.80 99614.72 1619646.62 00:08:10.669 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:10.669 Nvme1n1p1 : 6.22 90.73 5.67 0.00 0.00 1274524.68 109697.18 2077793.67 00:08:10.669 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x0 length 0x4ff7 00:08:10.669 Nvme1n1p2 : 6.03 74.24 4.64 0.00 0.00 1519792.44 105664.20 1819682.66 00:08:10.669 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:10.669 Nvme1n1p2 : 6.04 92.74 5.80 0.00 0.00 1226447.91 136314.88 1845493.76 00:08:10.669 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x0 length 0x8000 00:08:10.669 Nvme2n1 : 6.12 121.24 7.58 0.00 0.00 905257.75 57268.38 1258291.20 00:08:10.669 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x8000 length 0x8000 00:08:10.669 Nvme2n1 : 6.22 94.40 5.90 0.00 0.00 1150846.17 147607.24 2142321.43 00:08:10.669 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x0 length 0x8000 00:08:10.669 Nvme2n2 : 6.12 125.45 7.84 0.00 0.00 849255.45 83079.48 1297007.85 00:08:10.669 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x8000 length 0x8000 00:08:10.669 Nvme2n2 : 6.34 104.93 6.56 0.00 0.00 1011045.59 29440.79 2155226.98 00:08:10.669 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x0 length 0x8000 00:08:10.669 Nvme2n3 : 6.28 139.27 8.70 0.00 0.00 740028.45 31457.28 1322818.95 00:08:10.669 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x8000 length 0x8000 00:08:10.669 Nvme2n3 : 6.34 108.52 6.78 0.00 0.00 940895.60 41338.09 1974549.27 00:08:10.669 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x0 length 0x2000 00:08:10.669 Nvme3n1 : 6.34 161.59 10.10 0.00 0.00 615430.54 1172.09 1348630.06 00:08:10.669 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:10.669 Verification LBA range: start 0x2000 length 0x2000 00:08:10.669 Nvme3n1 : 6.38 131.57 8.22 0.00 0.00 
748040.26 582.89 2193943.63 00:08:10.669 [2024-11-27T00:30:47.456Z] =================================================================================================================== 00:08:10.669 [2024-11-27T00:30:47.456Z] Total : 1538.26 96.14 0.00 0.00 993517.35 582.89 2193943.63 00:08:12.046 00:08:12.046 real 0m8.648s 00:08:12.046 user 0m16.522s 00:08:12.046 sys 0m0.256s 00:08:12.046 00:30:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.046 ************************************ 00:08:12.046 END TEST bdev_verify_big_io 00:08:12.046 ************************************ 00:08:12.046 00:30:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:12.046 00:30:48 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:12.046 00:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:12.046 00:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.046 00:30:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.046 ************************************ 00:08:12.046 START TEST bdev_write_zeroes 00:08:12.046 ************************************ 00:08:12.046 00:30:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:12.304 [2024-11-27 00:30:48.866239] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:08:12.304 [2024-11-27 00:30:48.866357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74080 ] 00:08:12.304 [2024-11-27 00:30:49.019745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.304 [2024-11-27 00:30:49.049805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.870 Running I/O for 1 seconds... 
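The write_zeroes workload exercises the bdev's dedicated zero-fill I/O type rather than ordinary writes. Whether a given bdev advertises that type can be read back from the supported_io_types map in its JSON dump (the GPT bdevs later in this log report "write_zeroes": true); a one-liner sketch, with the bdev name illustrative:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq -r '.[0].supported_io_types.write_zeroes'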
00:08:13.807 64512.00 IOPS, 252.00 MiB/s 00:08:13.807 Latency(us) 00:08:13.807 [2024-11-27T00:30:50.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.807 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.807 Nvme0n1 : 1.02 9180.06 35.86 0.00 0.00 13911.03 9527.93 27625.94 00:08:13.807 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.807 Nvme1n1p1 : 1.03 9168.68 35.82 0.00 0.00 13912.39 9779.99 27021.00 00:08:13.807 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.807 Nvme1n1p2 : 1.03 9157.15 35.77 0.00 0.00 13892.96 9376.69 27827.59 00:08:13.807 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.807 Nvme2n1 : 1.03 9146.65 35.73 0.00 0.00 13854.68 9326.28 27625.94 00:08:13.807 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.807 Nvme2n2 : 1.03 9136.19 35.69 0.00 0.00 13853.21 9376.69 27827.59 00:08:13.807 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.807 Nvme2n3 : 1.03 9125.65 35.65 0.00 0.00 13814.09 8670.92 27827.59 00:08:13.807 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:13.807 Nvme3n1 : 1.03 9115.24 35.61 0.00 0.00 13813.10 8872.57 26819.35 00:08:13.807 [2024-11-27T00:30:50.594Z] =================================================================================================================== 00:08:13.807 [2024-11-27T00:30:50.594Z] Total : 64029.62 250.12 0.00 0.00 13864.49 8670.92 27827.59 00:08:14.069 00:08:14.069 real 0m1.876s 00:08:14.069 user 0m1.587s 00:08:14.069 sys 0m0.179s 00:08:14.069 00:30:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.069 ************************************ 00:08:14.069 END TEST bdev_write_zeroes 00:08:14.069 ************************************ 00:08:14.069 00:30:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:14.069 00:30:50 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:14.069 00:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:14.069 00:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.069 00:30:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:14.069 ************************************ 00:08:14.069 START TEST bdev_json_nonenclosed 00:08:14.069 ************************************ 00:08:14.069 00:30:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:14.069 [2024-11-27 00:30:50.778402] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
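bdev_json_nonenclosed feeds bdevperf a config whose body is not wrapped in a top-level object and expects a clean error exit rather than a crash. The literal nonenclosed.json is not reproduced in this log; a config of roughly this shape trips the same "not enclosed in {}" check traced below (illustrative only):

cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": [
    { "subsystem": "bdev", "config": [] }
]
EOF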
00:08:14.069 [2024-11-27 00:30:50.778508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74122 ] 00:08:14.338 [2024-11-27 00:30:50.936392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.338 [2024-11-27 00:30:50.959222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.338 [2024-11-27 00:30:50.959315] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:14.338 [2024-11-27 00:30:50.959334] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:14.338 [2024-11-27 00:30:50.959346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.338 00:08:14.338 real 0m0.307s 00:08:14.338 user 0m0.119s 00:08:14.338 sys 0m0.083s 00:08:14.338 00:30:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.338 ************************************ 00:08:14.338 END TEST bdev_json_nonenclosed 00:08:14.338 ************************************ 00:08:14.338 00:30:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:14.338 00:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:14.338 00:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:14.338 00:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.338 00:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:14.338 ************************************ 00:08:14.338 START TEST bdev_json_nonarray 00:08:14.338 ************************************ 00:08:14.338 00:30:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:14.598 [2024-11-27 00:30:51.137953] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:08:14.598 [2024-11-27 00:30:51.138066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74142 ] 00:08:14.598 [2024-11-27 00:30:51.290009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.598 [2024-11-27 00:30:51.314003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.598 [2024-11-27 00:30:51.314093] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
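The companion nonarray case makes "subsystems" a single object instead of an array, which produces the "'subsystems' should be an array" error just traced. Again an illustrative shape, not the literal test file:

cat > /tmp/nonarray.json <<'EOF'
{
    "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF
# a well-formed config would read: { "subsystems": [ ... ] }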
00:08:14.598 [2024-11-27 00:30:51.314110] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:14.598 [2024-11-27 00:30:51.314122] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.860 00:08:14.860 real 0m0.310s 00:08:14.860 user 0m0.100s 00:08:14.860 sys 0m0.107s 00:08:14.860 00:30:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.860 00:30:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:14.860 ************************************ 00:08:14.860 END TEST bdev_json_nonarray 00:08:14.860 ************************************ 00:08:14.861 00:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:14.861 00:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:14.861 00:30:51 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:14.861 00:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.861 00:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.861 00:30:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:14.861 ************************************ 00:08:14.861 START TEST bdev_gpt_uuid 00:08:14.861 ************************************ 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74162 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74162 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74162 ']' 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:14.861 00:30:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:14.861 [2024-11-27 00:30:51.504291] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
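Once spdk_tgt is listening, the GPT UUID check runs over plain RPC: load the bdev config, wait for the GPT examine pass to finish, then fetch each partition bdev by its unique GUID and compare fields with jq. A condensed sketch of the sequence traced below, using the first partition's GUID from the dump:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
$rpc bdev_wait_for_examine

bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
[ "$(echo "$bdev" | jq -r length)" = 1 ]
[ "$(echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid')" = \
    "6f89f330-603b-4116-ac73-2ca8eae53030" ]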
00:08:14.861 [2024-11-27 00:30:51.504422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74162 ] 00:08:15.122 [2024-11-27 00:30:51.664254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.122 [2024-11-27 00:30:51.688334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.693 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.693 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:15.693 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:15.693 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.693 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:15.955 Some configs were skipped because the RPC state that can call them passed over. 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:15.955 { 00:08:15.955 "name": "Nvme1n1p1", 00:08:15.955 "aliases": [ 00:08:15.955 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:15.955 ], 00:08:15.955 "product_name": "GPT Disk", 00:08:15.955 "block_size": 4096, 00:08:15.955 "num_blocks": 655104, 00:08:15.955 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:15.955 "assigned_rate_limits": { 00:08:15.955 "rw_ios_per_sec": 0, 00:08:15.955 "rw_mbytes_per_sec": 0, 00:08:15.955 "r_mbytes_per_sec": 0, 00:08:15.955 "w_mbytes_per_sec": 0 00:08:15.955 }, 00:08:15.955 "claimed": false, 00:08:15.955 "zoned": false, 00:08:15.955 "supported_io_types": { 00:08:15.955 "read": true, 00:08:15.955 "write": true, 00:08:15.955 "unmap": true, 00:08:15.955 "flush": true, 00:08:15.955 "reset": true, 00:08:15.955 "nvme_admin": false, 00:08:15.955 "nvme_io": false, 00:08:15.955 "nvme_io_md": false, 00:08:15.955 "write_zeroes": true, 00:08:15.955 "zcopy": false, 00:08:15.955 "get_zone_info": false, 00:08:15.955 "zone_management": false, 00:08:15.955 "zone_append": false, 00:08:15.955 "compare": true, 00:08:15.955 "compare_and_write": false, 00:08:15.955 "abort": true, 00:08:15.955 "seek_hole": false, 00:08:15.955 "seek_data": false, 00:08:15.955 "copy": true, 00:08:15.955 "nvme_iov_md": false 00:08:15.955 }, 00:08:15.955 "driver_specific": { 
00:08:15.955 "gpt": { 00:08:15.955 "base_bdev": "Nvme1n1", 00:08:15.955 "offset_blocks": 256, 00:08:15.955 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:15.955 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:15.955 "partition_name": "SPDK_TEST_first" 00:08:15.955 } 00:08:15.955 } 00:08:15.955 } 00:08:15.955 ]' 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:15.955 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:16.217 { 00:08:16.217 "name": "Nvme1n1p2", 00:08:16.217 "aliases": [ 00:08:16.217 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:16.217 ], 00:08:16.217 "product_name": "GPT Disk", 00:08:16.217 "block_size": 4096, 00:08:16.217 "num_blocks": 655103, 00:08:16.217 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:16.217 "assigned_rate_limits": { 00:08:16.217 "rw_ios_per_sec": 0, 00:08:16.217 "rw_mbytes_per_sec": 0, 00:08:16.217 "r_mbytes_per_sec": 0, 00:08:16.217 "w_mbytes_per_sec": 0 00:08:16.217 }, 00:08:16.217 "claimed": false, 00:08:16.217 "zoned": false, 00:08:16.217 "supported_io_types": { 00:08:16.217 "read": true, 00:08:16.217 "write": true, 00:08:16.217 "unmap": true, 00:08:16.217 "flush": true, 00:08:16.217 "reset": true, 00:08:16.217 "nvme_admin": false, 00:08:16.217 "nvme_io": false, 00:08:16.217 "nvme_io_md": false, 00:08:16.217 "write_zeroes": true, 00:08:16.217 "zcopy": false, 00:08:16.217 "get_zone_info": false, 00:08:16.217 "zone_management": false, 00:08:16.217 "zone_append": false, 00:08:16.217 "compare": true, 00:08:16.217 "compare_and_write": false, 00:08:16.217 "abort": true, 00:08:16.217 "seek_hole": false, 00:08:16.217 "seek_data": false, 00:08:16.217 "copy": true, 00:08:16.217 "nvme_iov_md": false 00:08:16.217 }, 00:08:16.217 "driver_specific": { 00:08:16.217 "gpt": { 00:08:16.217 "base_bdev": "Nvme1n1", 00:08:16.217 "offset_blocks": 655360, 00:08:16.217 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:16.217 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:16.217 "partition_name": "SPDK_TEST_second" 00:08:16.217 } 00:08:16.217 } 00:08:16.217 } 00:08:16.217 ]' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 74162 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74162 ']' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74162 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74162 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.217 killing process with pid 74162 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74162' 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74162 00:08:16.217 00:30:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74162 00:08:16.477 00:08:16.477 real 0m1.806s 00:08:16.477 user 0m1.929s 00:08:16.477 sys 0m0.370s 00:08:16.477 00:30:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.477 00:30:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:16.477 ************************************ 00:08:16.477 END TEST bdev_gpt_uuid 00:08:16.477 ************************************ 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:16.737 00:30:53 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:16.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:16.998 Waiting for block devices as requested 00:08:16.998 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:17.258 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:17.258 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:17.258 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.537 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:22.537 00:30:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:22.537 00:30:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:22.537 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:22.537 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:22.537 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:22.537 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:22.537 00:30:59 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:22.537 00:08:22.537 real 0m48.503s 00:08:22.537 user 1m1.984s 00:08:22.537 sys 0m7.865s 00:08:22.537 00:30:59 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.537 00:30:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:22.537 ************************************ 00:08:22.537 END TEST blockdev_nvme_gpt 00:08:22.537 ************************************ 00:08:22.797 00:30:59 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:22.797 00:30:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.797 00:30:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.797 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:08:22.797 ************************************ 00:08:22.797 START TEST nvme 00:08:22.797 ************************************ 00:08:22.797 00:30:59 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:22.797 * Looking for test storage... 00:08:22.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:22.797 00:30:59 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.797 00:30:59 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.797 00:30:59 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.797 00:30:59 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.797 00:30:59 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.797 00:30:59 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.797 00:30:59 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.797 00:30:59 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.797 00:30:59 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.797 00:30:59 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.797 00:30:59 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.797 00:30:59 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.797 00:30:59 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.797 00:30:59 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.797 00:30:59 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.797 00:30:59 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:22.797 00:30:59 nvme -- scripts/common.sh@345 -- # : 1 00:08:22.797 00:30:59 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.797 00:30:59 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.797 00:30:59 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:22.797 00:30:59 nvme -- scripts/common.sh@353 -- # local d=1 00:08:22.797 00:30:59 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.797 00:30:59 nvme -- scripts/common.sh@355 -- # echo 1 00:08:22.797 00:30:59 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.797 00:30:59 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:22.797 00:30:59 nvme -- scripts/common.sh@353 -- # local d=2 00:08:22.797 00:30:59 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.797 00:30:59 nvme -- scripts/common.sh@355 -- # echo 2 00:08:22.798 00:30:59 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.798 00:30:59 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.798 00:30:59 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.798 00:30:59 nvme -- scripts/common.sh@368 -- # return 0 00:08:22.798 00:30:59 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.798 00:30:59 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.798 --rc genhtml_branch_coverage=1 00:08:22.798 --rc genhtml_function_coverage=1 00:08:22.798 --rc genhtml_legend=1 00:08:22.798 --rc geninfo_all_blocks=1 00:08:22.798 --rc geninfo_unexecuted_blocks=1 00:08:22.798 00:08:22.798 ' 00:08:22.798 00:30:59 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.798 --rc genhtml_branch_coverage=1 00:08:22.798 --rc genhtml_function_coverage=1 00:08:22.798 --rc genhtml_legend=1 00:08:22.798 --rc geninfo_all_blocks=1 00:08:22.798 --rc geninfo_unexecuted_blocks=1 00:08:22.798 00:08:22.798 ' 00:08:22.798 00:30:59 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.798 --rc genhtml_branch_coverage=1 00:08:22.798 --rc genhtml_function_coverage=1 00:08:22.798 --rc genhtml_legend=1 00:08:22.798 --rc geninfo_all_blocks=1 00:08:22.798 --rc geninfo_unexecuted_blocks=1 00:08:22.798 00:08:22.798 ' 00:08:22.798 00:30:59 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.798 --rc genhtml_branch_coverage=1 00:08:22.798 --rc genhtml_function_coverage=1 00:08:22.798 --rc genhtml_legend=1 00:08:22.798 --rc geninfo_all_blocks=1 00:08:22.798 --rc geninfo_unexecuted_blocks=1 00:08:22.798 00:08:22.798 ' 00:08:22.798 00:30:59 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:23.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:23.631 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.631 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.631 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.893 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:23.893 00:31:00 nvme -- nvme/nvme.sh@79 -- # uname 00:08:23.893 00:31:00 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:23.893 00:31:00 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:23.893 00:31:00 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:23.893 00:31:00 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:23.893 Waiting for stub to ready for secondary processes... 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1075 -- # stubpid=74788 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74788 ]] 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:23.893 00:31:00 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:23.893 [2024-11-27 00:31:00.541126] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:08:23.893 [2024-11-27 00:31:00.541238] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:24.836 [2024-11-27 00:31:01.490072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.836 [2024-11-27 00:31:01.506466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.836 [2024-11-27 00:31:01.507037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.836 [2024-11-27 00:31:01.507097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.836 00:31:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:24.836 00:31:01 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74788 ]] 00:08:24.836 00:31:01 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:24.836 [2024-11-27 00:31:01.523011] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:24.836 [2024-11-27 00:31:01.523139] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:24.836 [2024-11-27 00:31:01.535916] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:24.836 [2024-11-27 00:31:01.536129] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:24.836 [2024-11-27 00:31:01.537298] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:24.836 [2024-11-27 00:31:01.537561] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:24.836 [2024-11-27 00:31:01.537626] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:24.836 [2024-11-27 00:31:01.538251] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:24.836 [2024-11-27 00:31:01.538552] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:24.836 [2024-11-27 00:31:01.538638] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:24.836 [2024-11-27 00:31:01.540101] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:24.836 [2024-11-27 00:31:01.540362] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:24.836 [2024-11-27 00:31:01.540435] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:24.836 [2024-11-27 00:31:01.540516] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:24.836 [2024-11-27 00:31:01.540586] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:25.778 done. 00:08:25.778 00:31:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:25.778 00:31:02 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:25.778 00:31:02 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:25.778 00:31:02 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:25.778 00:31:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.778 00:31:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.778 ************************************ 00:08:25.778 START TEST nvme_reset 00:08:25.778 ************************************ 00:08:25.778 00:31:02 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:26.101 Initializing NVMe Controllers 00:08:26.101 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:26.101 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:26.101 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:26.101 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:26.101 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:26.101 00:08:26.101 real 0m0.210s 00:08:26.101 user 0m0.064s 00:08:26.101 sys 0m0.101s 00:08:26.101 ************************************ 00:08:26.101 END TEST nvme_reset 00:08:26.101 ************************************ 00:08:26.101 00:31:02 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.101 00:31:02 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:26.101 00:31:02 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:26.101 00:31:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.101 00:31:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.101 00:31:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.101 ************************************ 00:08:26.101 START TEST nvme_identify 00:08:26.101 ************************************ 00:08:26.101 00:31:02 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:26.101 00:31:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:26.101 00:31:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:26.101 00:31:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:26.101 00:31:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:26.101 00:31:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:26.101 00:31:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:26.101 00:31:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:26.101 00:31:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:26.101 00:31:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:26.372 00:31:02 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:26.372 00:31:02 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:26.372 00:31:02 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:26.372 [2024-11-27 00:31:03.050346] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74821 terminated unexpected 00:08:26.372 ===================================================== 00:08:26.372 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.372 ===================================================== 00:08:26.372 Controller Capabilities/Features 00:08:26.372 ================================ 00:08:26.372 Vendor ID: 1b36 00:08:26.372 Subsystem Vendor ID: 1af4 00:08:26.372 Serial Number: 12343 00:08:26.372 Model Number: QEMU NVMe Ctrl 00:08:26.372 Firmware Version: 8.0.0 00:08:26.372 Recommended Arb Burst: 6 00:08:26.372 IEEE OUI Identifier: 00 54 52 00:08:26.372 Multi-path I/O 00:08:26.372 May have multiple subsystem ports: No 00:08:26.372 May have multiple controllers: Yes 00:08:26.372 Associated with SR-IOV VF: No 00:08:26.372 Max Data Transfer Size: 524288 00:08:26.372 Max Number of Namespaces: 256 00:08:26.372 Max Number of I/O Queues: 64 00:08:26.372 NVMe Specification Version (VS): 1.4 00:08:26.372 NVMe Specification Version (Identify): 1.4 00:08:26.372 Maximum Queue Entries: 2048 00:08:26.372 Contiguous Queues Required: Yes 00:08:26.372 Arbitration Mechanisms Supported 00:08:26.372 Weighted Round Robin: Not Supported 00:08:26.372 Vendor Specific: Not Supported 00:08:26.372 Reset Timeout: 7500 ms 00:08:26.372 Doorbell Stride: 4 bytes 00:08:26.372 NVM Subsystem Reset: Not Supported 00:08:26.372 Command Sets Supported 00:08:26.372 NVM Command Set: Supported 00:08:26.372 Boot Partition: Not Supported 00:08:26.372 Memory Page Size Minimum: 4096 bytes 00:08:26.372 Memory Page Size Maximum: 65536 bytes 00:08:26.372 Persistent Memory Region: Not Supported 00:08:26.372 Optional Asynchronous Events Supported 00:08:26.372 Namespace Attribute Notices: Supported 00:08:26.372 Firmware Activation Notices: Not Supported 00:08:26.372 ANA Change Notices: Not Supported 00:08:26.372 PLE Aggregate Log Change Notices: Not Supported 00:08:26.372 LBA Status Info Alert Notices: Not Supported 00:08:26.372 EGE Aggregate Log Change Notices: Not Supported 00:08:26.372 Normal NVM Subsystem Shutdown event: Not Supported 00:08:26.372 Zone Descriptor Change Notices: Not Supported 00:08:26.372 Discovery Log Change Notices: Not Supported 00:08:26.372 Controller Attributes 00:08:26.372 128-bit Host Identifier: Not Supported 00:08:26.372 Non-Operational Permissive Mode: Not Supported 00:08:26.372 NVM Sets: Not Supported 00:08:26.372 Read Recovery Levels: Not Supported 00:08:26.372 Endurance Groups: Supported 00:08:26.372 Predictable Latency Mode: Not Supported 00:08:26.372 Traffic Based Keep ALive: Not Supported 00:08:26.372 Namespace Granularity: Not Supported 00:08:26.372 SQ Associations: Not Supported 00:08:26.372 UUID List: Not Supported 00:08:26.372 Multi-Domain Subsystem: Not Supported 00:08:26.372 Fixed Capacity Management: Not Supported 00:08:26.372 Variable Capacity Management: Not Supported 00:08:26.372 Delete Endurance Group: Not Supported 00:08:26.372 Delete NVM Set: Not Supported 00:08:26.372 Extended LBA Formats Supported: Supported 00:08:26.372 Flexible Data Placement Supported: Supported 00:08:26.372 00:08:26.372 Controller Memory Buffer Support 00:08:26.372 ================================ 00:08:26.372 Supported: No 00:08:26.372 
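The BDF list driving this identify dump was built by piping the attach-config JSON from gen_nvme.sh through jq (see the get_nvme_bdfs calls above), after which a single spdk_nvme_identify -i 0 run attaches to and dumps all four QEMU controllers. A rough per-controller equivalent is sketched below; the -r transport-ID form is an assumption, not what this run used:

    gen=/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    for bdf in $("$gen" | jq -r '.config[].params.traddr'); do
        "$identify" -r "trtype:PCIe traddr:$bdf"            # one controller per invocation
    done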
00:08:26.372 Persistent Memory Region Support 00:08:26.372 ================================ 00:08:26.372 Supported: No 00:08:26.372 00:08:26.372 Admin Command Set Attributes 00:08:26.372 ============================ 00:08:26.372 Security Send/Receive: Not Supported 00:08:26.372 Format NVM: Supported 00:08:26.372 Firmware Activate/Download: Not Supported 00:08:26.372 Namespace Management: Supported 00:08:26.372 Device Self-Test: Not Supported 00:08:26.372 Directives: Supported 00:08:26.372 NVMe-MI: Not Supported 00:08:26.372 Virtualization Management: Not Supported 00:08:26.372 Doorbell Buffer Config: Supported 00:08:26.372 Get LBA Status Capability: Not Supported 00:08:26.372 Command & Feature Lockdown Capability: Not Supported 00:08:26.372 Abort Command Limit: 4 00:08:26.372 Async Event Request Limit: 4 00:08:26.372 Number of Firmware Slots: N/A 00:08:26.372 Firmware Slot 1 Read-Only: N/A 00:08:26.372 Firmware Activation Without Reset: N/A 00:08:26.372 Multiple Update Detection Support: N/A 00:08:26.372 Firmware Update Granularity: No Information Provided 00:08:26.372 Per-Namespace SMART Log: Yes 00:08:26.372 Asymmetric Namespace Access Log Page: Not Supported 00:08:26.372 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:26.372 Command Effects Log Page: Supported 00:08:26.372 Get Log Page Extended Data: Supported 00:08:26.372 Telemetry Log Pages: Not Supported 00:08:26.372 Persistent Event Log Pages: Not Supported 00:08:26.372 Supported Log Pages Log Page: May Support 00:08:26.372 Commands Supported & Effects Log Page: Not Supported 00:08:26.372 Feature Identifiers & Effects Log Page:May Support 00:08:26.372 NVMe-MI Commands & Effects Log Page: May Support 00:08:26.372 Data Area 4 for Telemetry Log: Not Supported 00:08:26.372 Error Log Page Entries Supported: 1 00:08:26.372 Keep Alive: Not Supported 00:08:26.372 00:08:26.372 NVM Command Set Attributes 00:08:26.372 ========================== 00:08:26.372 Submission Queue Entry Size 00:08:26.372 Max: 64 00:08:26.372 Min: 64 00:08:26.372 Completion Queue Entry Size 00:08:26.372 Max: 16 00:08:26.372 Min: 16 00:08:26.372 Number of Namespaces: 256 00:08:26.372 Compare Command: Supported 00:08:26.372 Write Uncorrectable Command: Not Supported 00:08:26.372 Dataset Management Command: Supported 00:08:26.372 Write Zeroes Command: Supported 00:08:26.372 Set Features Save Field: Supported 00:08:26.372 Reservations: Not Supported 00:08:26.372 Timestamp: Supported 00:08:26.372 Copy: Supported 00:08:26.372 Volatile Write Cache: Present 00:08:26.372 Atomic Write Unit (Normal): 1 00:08:26.372 Atomic Write Unit (PFail): 1 00:08:26.372 Atomic Compare & Write Unit: 1 00:08:26.372 Fused Compare & Write: Not Supported 00:08:26.372 Scatter-Gather List 00:08:26.372 SGL Command Set: Supported 00:08:26.372 SGL Keyed: Not Supported 00:08:26.372 SGL Bit Bucket Descriptor: Not Supported 00:08:26.372 SGL Metadata Pointer: Not Supported 00:08:26.372 Oversized SGL: Not Supported 00:08:26.372 SGL Metadata Address: Not Supported 00:08:26.372 SGL Offset: Not Supported 00:08:26.372 Transport SGL Data Block: Not Supported 00:08:26.372 Replay Protected Memory Block: Not Supported 00:08:26.372 00:08:26.372 Firmware Slot Information 00:08:26.372 ========================= 00:08:26.372 Active slot: 1 00:08:26.372 Slot 1 Firmware Revision: 1.0 00:08:26.372 00:08:26.372 00:08:26.372 Commands Supported and Effects 00:08:26.372 ============================== 00:08:26.372 Admin Commands 00:08:26.372 -------------- 00:08:26.372 Delete I/O Submission Queue (00h): Supported 
00:08:26.372 Create I/O Submission Queue (01h): Supported 00:08:26.372 Get Log Page (02h): Supported 00:08:26.372 Delete I/O Completion Queue (04h): Supported 00:08:26.372 Create I/O Completion Queue (05h): Supported 00:08:26.372 Identify (06h): Supported 00:08:26.372 Abort (08h): Supported 00:08:26.372 Set Features (09h): Supported 00:08:26.372 Get Features (0Ah): Supported 00:08:26.372 Asynchronous Event Request (0Ch): Supported 00:08:26.372 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:26.372 Directive Send (19h): Supported 00:08:26.372 Directive Receive (1Ah): Supported 00:08:26.372 Virtualization Management (1Ch): Supported 00:08:26.372 Doorbell Buffer Config (7Ch): Supported 00:08:26.372 Format NVM (80h): Supported LBA-Change 00:08:26.372 I/O Commands 00:08:26.372 ------------ 00:08:26.372 Flush (00h): Supported LBA-Change 00:08:26.372 Write (01h): Supported LBA-Change 00:08:26.372 Read (02h): Supported 00:08:26.372 Compare (05h): Supported 00:08:26.372 Write Zeroes (08h): Supported LBA-Change 00:08:26.372 Dataset Management (09h): Supported LBA-Change 00:08:26.372 Unknown (0Ch): Supported 00:08:26.372 Unknown (12h): Supported 00:08:26.372 Copy (19h): Supported LBA-Change 00:08:26.372 Unknown (1Dh): Supported LBA-Change 00:08:26.372 00:08:26.372 Error Log 00:08:26.372 ========= 00:08:26.372 00:08:26.372 Arbitration 00:08:26.372 =========== 00:08:26.373 Arbitration Burst: no limit 00:08:26.373 00:08:26.373 Power Management 00:08:26.373 ================ 00:08:26.373 Number of Power States: 1 00:08:26.373 Current Power State: Power State #0 00:08:26.373 Power State #0: 00:08:26.373 Max Power: 25.00 W 00:08:26.373 Non-Operational State: Operational 00:08:26.373 Entry Latency: 16 microseconds 00:08:26.373 Exit Latency: 4 microseconds 00:08:26.373 Relative Read Throughput: 0 00:08:26.373 Relative Read Latency: 0 00:08:26.373 Relative Write Throughput: 0 00:08:26.373 Relative Write Latency: 0 00:08:26.373 Idle Power: Not Reported 00:08:26.373 Active Power: Not Reported 00:08:26.373 Non-Operational Permissive Mode: Not Supported 00:08:26.373 00:08:26.373 Health Information 00:08:26.373 ================== 00:08:26.373 Critical Warnings: 00:08:26.373 Available Spare Space: OK 00:08:26.373 Temperature: OK 00:08:26.373 Device Reliability: OK 00:08:26.373 Read Only: No 00:08:26.373 Volatile Memory Backup: OK 00:08:26.373 Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.373 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:26.373 Available Spare: 0% 00:08:26.373 Available Spare Threshold: 0% 00:08:26.373 Life Percentage Used: 0% 00:08:26.373 Data Units Read: 854 00:08:26.373 Data Units Written: 783 00:08:26.373 Host Read Commands: 38129 00:08:26.373 Host Write Commands: 37552 00:08:26.373 Controller Busy Time: 0 minutes 00:08:26.373 Power Cycles: 0 00:08:26.373 Power On Hours: 0 hours 00:08:26.373 Unsafe Shutdowns: 0 00:08:26.373 Unrecoverable Media Errors: 0 00:08:26.373 Lifetime Error Log Entries: 0 00:08:26.373 Warning Temperature Time: 0 minutes 00:08:26.373 Critical Temperature Time: 0 minutes 00:08:26.373 00:08:26.373 Number of Queues 00:08:26.373 ================ 00:08:26.373 Number of I/O Submission Queues: 64 00:08:26.373 Number of I/O Completion Queues: 64 00:08:26.373 00:08:26.373 ZNS Specific Controller Data 00:08:26.373 ============================ 00:08:26.373 Zone Append Size Limit: 0 00:08:26.373 00:08:26.373 00:08:26.373 Active Namespaces 00:08:26.373 ================= 00:08:26.373 Namespace ID:1 00:08:26.373 Error Recovery Timeout: Unlimited 00:08:26.373 
Command Set Identifier: NVM (00h) 00:08:26.373 Deallocate: Supported 00:08:26.373 Deallocated/Unwritten Error: Supported 00:08:26.373 Deallocated Read Value: All 0x00 00:08:26.373 Deallocate in Write Zeroes: Not Supported 00:08:26.373 Deallocated Guard Field: 0xFFFF 00:08:26.373 Flush: Supported 00:08:26.373 Reservation: Not Supported 00:08:26.373 Namespace Sharing Capabilities: Multiple Controllers 00:08:26.373 Size (in LBAs): 262144 (1GiB) 00:08:26.373 Capacity (in LBAs): 262144 (1GiB) 00:08:26.373 Utilization (in LBAs): 262144 (1GiB) 00:08:26.373 Thin Provisioning: Not Supported 00:08:26.373 Per-NS Atomic Units: No 00:08:26.373 Maximum Single Source Range Length: 128 00:08:26.373 Maximum Copy Length: 128 00:08:26.373 Maximum Source Range Count: 128 00:08:26.373 NGUID/EUI64 Never Reused: No 00:08:26.373 Namespace Write Protected: No 00:08:26.373 Endurance group ID: 1 00:08:26.373 Number of LBA Formats: 8 00:08:26.373 Current LBA Format: LBA Format #04 00:08:26.373 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:26.373 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:26.373 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.373 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.373 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.373 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.373 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.373 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.373 00:08:26.373 Get Feature FDP: 00:08:26.373 ================ 00:08:26.373 Enabled: Yes 00:08:26.373 FDP configuration index: 0 00:08:26.373 00:08:26.373 FDP configurations log page 00:08:26.373 =========================== 00:08:26.373 Number of FDP configurations: 1 00:08:26.373 Version: 0 00:08:26.373 Size: 112 00:08:26.373 FDP Configuration Descriptor: 0 00:08:26.373 Descriptor Size: 96 00:08:26.373 Reclaim Group Identifier format: 2 00:08:26.373 FDP Volatile Write Cache: Not Present 00:08:26.373 FDP Configuration: Valid 00:08:26.373 Vendor Specific Size: 0 00:08:26.373 Number of Reclaim Groups: 2 00:08:26.373 Number of Reclaim Unit Handles: 8 00:08:26.373 Max Placement Identifiers: 128 00:08:26.373 Number of Namespaces Supported: 256 00:08:26.373 Reclaim unit Nominal Size: 6000000 bytes 00:08:26.373 Estimated Reclaim Unit Time Limit: Not Reported 00:08:26.373 RUH Desc #000: RUH Type: Initially Isolated 00:08:26.373 RUH Desc #001: RUH Type: Initially Isolated 00:08:26.373 RUH Desc #002: RUH Type: Initially Isolated 00:08:26.373 RUH Desc #003: RUH Type: Initially Isolated 00:08:26.373 RUH Desc #004: RUH Type: Initially Isolated 00:08:26.373 RUH Desc #005: RUH Type: Initially Isolated 00:08:26.373 RUH Desc #006: RUH Type: Initially Isolated 00:08:26.373 RUH Desc #007: RUH Type: Initially Isolated 00:08:26.373 00:08:26.373 FDP reclaim unit handle usage log page 00:08:26.373 ==================================[2024-11-27 00:31:03.052267] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74821 terminated unexpected 00:08:26.373 ==== 00:08:26.373 Number of Reclaim Unit Handles: 8 00:08:26.373 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:26.373 RUH Usage Desc #001: RUH Attributes: Unused 00:08:26.373 RUH Usage Desc #002: RUH Attributes: Unused 00:08:26.373 RUH Usage Desc #003: RUH Attributes: Unused 00:08:26.373 RUH Usage Desc #004: RUH Attributes: Unused 00:08:26.373 RUH Usage Desc #005: RUH Attributes: Unused 00:08:26.373 RUH Usage Desc #006: RUH Attributes: Unused 00:08:26.373 RUH Usage Desc
#007: RUH Attributes: Unused 00:08:26.373 00:08:26.373 FDP statistics log page 00:08:26.373 ======================= 00:08:26.373 Host bytes with metadata written: 497393664 00:08:26.373 Media bytes with metadata written: 497446912 00:08:26.373 Media bytes erased: 0 00:08:26.373 00:08:26.373 FDP events log page 00:08:26.373 =================== 00:08:26.373 Number of FDP events: 0 00:08:26.373 00:08:26.373 NVM Specific Namespace Data 00:08:26.373 =========================== 00:08:26.373 Logical Block Storage Tag Mask: 0 00:08:26.373 Protection Information Capabilities: 00:08:26.373 16b Guard Protection Information Storage Tag Support: No 00:08:26.373 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:26.373 Storage Tag Check Read Support: No 00:08:26.373 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.373 ===================================================== 00:08:26.373 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.373 ===================================================== 00:08:26.373 Controller Capabilities/Features 00:08:26.373 ================================ 00:08:26.373 Vendor ID: 1b36 00:08:26.373 Subsystem Vendor ID: 1af4 00:08:26.373 Serial Number: 12340 00:08:26.373 Model Number: QEMU NVMe Ctrl 00:08:26.373 Firmware Version: 8.0.0 00:08:26.373 Recommended Arb Burst: 6 00:08:26.373 IEEE OUI Identifier: 00 54 52 00:08:26.373 Multi-path I/O 00:08:26.373 May have multiple subsystem ports: No 00:08:26.373 May have multiple controllers: No 00:08:26.373 Associated with SR-IOV VF: No 00:08:26.373 Max Data Transfer Size: 524288 00:08:26.373 Max Number of Namespaces: 256 00:08:26.373 Max Number of I/O Queues: 64 00:08:26.373 NVMe Specification Version (VS): 1.4 00:08:26.373 NVMe Specification Version (Identify): 1.4 00:08:26.373 Maximum Queue Entries: 2048 00:08:26.373 Contiguous Queues Required: Yes 00:08:26.373 Arbitration Mechanisms Supported 00:08:26.373 Weighted Round Robin: Not Supported 00:08:26.373 Vendor Specific: Not Supported 00:08:26.373 Reset Timeout: 7500 ms 00:08:26.373 Doorbell Stride: 4 bytes 00:08:26.373 NVM Subsystem Reset: Not Supported 00:08:26.373 Command Sets Supported 00:08:26.373 NVM Command Set: Supported 00:08:26.373 Boot Partition: Not Supported 00:08:26.373 Memory Page Size Minimum: 4096 bytes 00:08:26.373 Memory Page Size Maximum: 65536 bytes 00:08:26.373 Persistent Memory Region: Not Supported 00:08:26.374 Optional Asynchronous Events Supported 00:08:26.374 Namespace Attribute Notices: Supported 00:08:26.374 Firmware Activation Notices: Not Supported 00:08:26.374 ANA Change Notices: Not Supported 00:08:26.374 PLE Aggregate Log Change Notices: Not Supported 00:08:26.374 LBA Status Info Alert Notices: Not Supported 00:08:26.374 EGE Aggregate Log Change 
Notices: Not Supported 00:08:26.374 Normal NVM Subsystem Shutdown event: Not Supported 00:08:26.374 Zone Descriptor Change Notices: Not Supported 00:08:26.374 Discovery Log Change Notices: Not Supported 00:08:26.374 Controller Attributes 00:08:26.374 128-bit Host Identifier: Not Supported 00:08:26.374 Non-Operational Permissive Mode: Not Supported 00:08:26.374 NVM Sets: Not Supported 00:08:26.374 Read Recovery Levels: Not Supported 00:08:26.374 Endurance Groups: Not Supported 00:08:26.374 Predictable Latency Mode: Not Supported 00:08:26.374 Traffic Based Keep ALive: Not Supported 00:08:26.374 Namespace Granularity: Not Supported 00:08:26.374 SQ Associations: Not Supported 00:08:26.374 UUID List: Not Supported 00:08:26.374 Multi-Domain Subsystem: Not Supported 00:08:26.374 Fixed Capacity Management: Not Supported 00:08:26.374 Variable Capacity Management: Not Supported 00:08:26.374 Delete Endurance Group: Not Supported 00:08:26.374 Delete NVM Set: Not Supported 00:08:26.374 Extended LBA Formats Supported: Supported 00:08:26.374 Flexible Data Placement Supported: Not Supported 00:08:26.374 00:08:26.374 Controller Memory Buffer Support 00:08:26.374 ================================ 00:08:26.374 Supported: No 00:08:26.374 00:08:26.374 Persistent Memory Region Support 00:08:26.374 ================================ 00:08:26.374 Supported: No 00:08:26.374 00:08:26.374 Admin Command Set Attributes 00:08:26.374 ============================ 00:08:26.374 Security Send/Receive: Not Supported 00:08:26.374 Format NVM: Supported 00:08:26.374 Firmware Activate/Download: Not Supported 00:08:26.374 Namespace Management: Supported 00:08:26.374 Device Self-Test: Not Supported 00:08:26.374 Directives: Supported 00:08:26.374 NVMe-MI: Not Supported 00:08:26.374 Virtualization Management: Not Supported 00:08:26.374 Doorbell Buffer Config: Supported 00:08:26.374 Get LBA Status Capability: Not Supported 00:08:26.374 Command & Feature Lockdown Capability: Not Supported 00:08:26.374 Abort Command Limit: 4 00:08:26.374 Async Event Request Limit: 4 00:08:26.374 Number of Firmware Slots: N/A 00:08:26.374 Firmware Slot 1 Read-Only: N/A 00:08:26.374 Firmware Activation Without Reset: N/A 00:08:26.374 Multiple Update Detection Support: N/A 00:08:26.374 Firmware Update Granularity: No Information Provided 00:08:26.374 Per-Namespace SMART Log: Yes 00:08:26.374 Asymmetric Namespace Access Log Page: Not Supported 00:08:26.374 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:26.374 Command Effects Log Page: Supported 00:08:26.374 Get Log Page Extended Data: Supported 00:08:26.374 Telemetry Log Pages: Not Supported 00:08:26.374 Persistent Event Log Pages: Not Supported 00:08:26.374 Supported Log Pages Log Page: May Support 00:08:26.374 Commands Supported & Effects Log Page: Not Supported 00:08:26.374 Feature Identifiers & Effects Log Page:May Support 00:08:26.374 NVMe-MI Commands & Effects Log Page: May Support 00:08:26.374 Data Area 4 for Telemetry Log: Not Supported 00:08:26.374 Error Log Page Entries Supported: 1 00:08:26.374 Keep Alive: Not Supported 00:08:26.374 00:08:26.374 NVM Command Set Attributes 00:08:26.374 ========================== 00:08:26.374 Submission Queue Entry Size 00:08:26.374 Max: 64 00:08:26.374 Min: 64 00:08:26.374 Completion Queue Entry Size 00:08:26.374 Max: 16 00:08:26.374 Min: 16 00:08:26.374 Number of Namespaces: 256 00:08:26.374 Compare Command: Supported 00:08:26.374 Write Uncorrectable Command: Not Supported 00:08:26.374 Dataset Management Command: Supported 00:08:26.374 Write Zeroes Command: 
Supported 00:08:26.374 Set Features Save Field: Supported 00:08:26.374 Reservations: Not Supported 00:08:26.374 Timestamp: Supported 00:08:26.374 Copy: Supported 00:08:26.374 Volatile Write Cache: Present 00:08:26.374 Atomic Write Unit (Normal): 1 00:08:26.374 Atomic Write Unit (PFail): 1 00:08:26.374 Atomic Compare & Write Unit: 1 00:08:26.374 Fused Compare & Write: Not Supported 00:08:26.374 Scatter-Gather List 00:08:26.374 SGL Command Set: Supported 00:08:26.374 SGL Keyed: Not Supported 00:08:26.374 SGL Bit Bucket Descriptor: Not Supported 00:08:26.374 SGL Metadata Pointer: Not Supported 00:08:26.374 Oversized SGL: Not Supported 00:08:26.374 SGL Metadata Address: Not Supported 00:08:26.374 SGL Offset: Not Supported 00:08:26.374 Transport SGL Data Block: Not Supported 00:08:26.374 Replay Protected Memory Block: Not Supported 00:08:26.374 00:08:26.374 Firmware Slot Information 00:08:26.374 ========================= 00:08:26.374 Active slot: 1 00:08:26.374 Slot 1 Firmware Revision: 1.0 00:08:26.374 00:08:26.374 00:08:26.374 Commands Supported and Effects 00:08:26.374 ============================== 00:08:26.374 Admin Commands 00:08:26.374 -------------- 00:08:26.374 Delete I/O Submission Queue (00h): Supported 00:08:26.374 Create I/O Submission Queue (01h): Supported 00:08:26.374 Get Log Page (02h): Supported 00:08:26.374 Delete I/O Completion Queue (04h): Supported 00:08:26.374 Create I/O Completion Queue (05h): Supported 00:08:26.374 Identify (06h): Supported 00:08:26.374 Abort (08h): Supported 00:08:26.374 Set Features (09h): Supported 00:08:26.374 Get Features (0Ah): Supported 00:08:26.374 Asynchronous Event Request (0Ch): Supported 00:08:26.374 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:26.374 Directive Send (19h): Supported 00:08:26.374 Directive Receive (1Ah): Supported 00:08:26.374 Virtualization Management (1Ch): Supported 00:08:26.374 Doorbell Buffer Config (7Ch): Supported 00:08:26.374 Format NVM (80h): Supported LBA-Change 00:08:26.374 I/O Commands 00:08:26.374 ------------ 00:08:26.374 Flush (00h): Supported LBA-Change 00:08:26.374 Write (01h): Supported LBA-Change 00:08:26.374 Read (02h): Supported 00:08:26.374 Compare (05h): Supported 00:08:26.374 Write Zeroes (08h): Supported LBA-Change 00:08:26.374 Dataset Management (09h): Supported LBA-Change 00:08:26.374 Unknown (0Ch): Supported 00:08:26.374 Unknown (12h): Supported 00:08:26.374 Copy (19h): Supported LBA-Change 00:08:26.374 Unknown (1Dh): Supported LBA-Change 00:08:26.374 00:08:26.374 Error Log 00:08:26.374 ========= 00:08:26.374 00:08:26.374 Arbitration 00:08:26.374 =========== 00:08:26.374 Arbitration Burst: no limit 00:08:26.374 00:08:26.374 Power Management 00:08:26.374 ================ 00:08:26.374 Number of Power States: 1 00:08:26.374 Current Power State: Power State #0 00:08:26.374 Power State #0: 00:08:26.374 Max Power: 25.00 W 00:08:26.374 Non-Operational State: Operational 00:08:26.374 Entry Latency: 16 microseconds 00:08:26.374 Exit Latency: 4 microseconds 00:08:26.374 Relative Read Throughput: 0 00:08:26.374 Relative Read Latency: 0 00:08:26.374 Relative Write Throughput: 0 00:08:26.374 Relative Write Latency: 0 00:08:26.374 Idle Power: Not Reported 00:08:26.374 Active Power: Not Reported 00:08:26.374 Non-Operational Permissive Mode: Not Supported 00:08:26.374 00:08:26.374 Health Information 00:08:26.374 ================== 00:08:26.374 Critical Warnings: 00:08:26.374 Available Spare Space: OK 00:08:26.374 Temperature: OK 00:08:26.374 Device Reliability: OK 00:08:26.374 Read Only: No 
00:08:26.374 Volatile Memory Backup: OK 00:08:26.374 Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.374 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:26.374 Available Spare: 0% 00:08:26.374 Available Spare Threshold: 0% 00:08:26.374 Life Percentage Used: 0% 00:08:26.374 Data Units Read: 675 00:08:26.374 Data Units Written: 604 00:08:26.374 Host Read Commands: 36436 00:08:26.374 Host Write Commands: 36222 00:08:26.374 Controller Busy Time: 0 minutes 00:08:26.374 Power Cycles: 0 00:08:26.374 Power On Hours: 0 hours 00:08:26.374 Unsafe Shutdowns: 0 00:08:26.374 Unrecoverable Media Errors: 0 00:08:26.374 Lifetime Error Log Entries: 0 00:08:26.374 Warning Temperature Time: 0 minutes 00:08:26.374 Critical Temperature Time: 0 minutes 00:08:26.374 00:08:26.374 Number of Queues 00:08:26.374 ================ 00:08:26.374 Number of I/O Submission Queues: 64 00:08:26.374 Number of I/O Completion Queues: 64 00:08:26.374 00:08:26.374 ZNS Specific Controller Data 00:08:26.374 ============================ 00:08:26.374 Zone Append Size Limit: 0 00:08:26.374 00:08:26.374 00:08:26.374 Active Namespaces 00:08:26.374 ================= 00:08:26.374 Namespace ID:1 00:08:26.374 Error Recovery Timeout: Unlimited 00:08:26.375 Command Set Identifier: NVM (00h) 00:08:26.375 Deallocate: Supported 00:08:26.375 Deallocated/Unwritten Error: Supported 00:08:26.375 Deallocated Read Value: All 0x00 00:08:26.375 Deallocate in Write Zeroes: Not Supported 00:08:26.375 Deallocated Guard Field: 0xFFFF 00:08:26.375 Flush: Supported 00:08:26.375 Reservation: Not Supported 00:08:26.375 Metadata Transferred as: Separate Metadata Buffer 00:08:26.375 Namespace Sharing Capabilities: Private 00:08:26.375 Size (in LBAs): 1548666 (5GiB) 00:08:26.375 Capacity (in LBAs): 1548666 (5GiB) 00:08:26.375 Utilization (in LBAs): 1548666 (5GiB) 00:08:26.375 Thin Provisioning: Not Supported 00:08:26.375 Per-NS Atomic Units: No 00:08:26.375 Maximum Single Source Range Length: 128 00:08:26.375 Maximum Copy Length: 128 00:08:26.375 Maximum Source Range Count: 128 00:08:26.375 NGUID/EUI64 Never Reused: No 00:08:26.375 Namespace Write Protected: No 00:08:26.375 Number of LBA Formats: 8 00:08:26.375 Current LBA Format: [2024-11-27 00:31:03.053086] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74821 terminated unexpected 00:08:26.375 LBA Format #07 00:08:26.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:26.375 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:26.375 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.375 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.375 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.375 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.375 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.375 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.375 00:08:26.375 NVM Specific Namespace Data 00:08:26.375 =========================== 00:08:26.375 Logical Block Storage Tag Mask: 0 00:08:26.375 Protection Information Capabilities: 00:08:26.375 16b Guard Protection Information Storage Tag Support: No 00:08:26.375 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:26.375 Storage Tag Check Read Support: No 00:08:26.375 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.375 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.375 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:26.375 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.375 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.375 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.375 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.375 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.375 ===================================================== 00:08:26.375 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.375 ===================================================== 00:08:26.375 Controller Capabilities/Features 00:08:26.375 ================================ 00:08:26.375 Vendor ID: 1b36 00:08:26.375 Subsystem Vendor ID: 1af4 00:08:26.375 Serial Number: 12341 00:08:26.375 Model Number: QEMU NVMe Ctrl 00:08:26.375 Firmware Version: 8.0.0 00:08:26.375 Recommended Arb Burst: 6 00:08:26.375 IEEE OUI Identifier: 00 54 52 00:08:26.375 Multi-path I/O 00:08:26.375 May have multiple subsystem ports: No 00:08:26.375 May have multiple controllers: No 00:08:26.375 Associated with SR-IOV VF: No 00:08:26.375 Max Data Transfer Size: 524288 00:08:26.375 Max Number of Namespaces: 256 00:08:26.375 Max Number of I/O Queues: 64 00:08:26.375 NVMe Specification Version (VS): 1.4 00:08:26.375 NVMe Specification Version (Identify): 1.4 00:08:26.375 Maximum Queue Entries: 2048 00:08:26.375 Contiguous Queues Required: Yes 00:08:26.375 Arbitration Mechanisms Supported 00:08:26.375 Weighted Round Robin: Not Supported 00:08:26.375 Vendor Specific: Not Supported 00:08:26.375 Reset Timeout: 7500 ms 00:08:26.375 Doorbell Stride: 4 bytes 00:08:26.375 NVM Subsystem Reset: Not Supported 00:08:26.375 Command Sets Supported 00:08:26.375 NVM Command Set: Supported 00:08:26.375 Boot Partition: Not Supported 00:08:26.375 Memory Page Size Minimum: 4096 bytes 00:08:26.375 Memory Page Size Maximum: 65536 bytes 00:08:26.375 Persistent Memory Region: Not Supported 00:08:26.375 Optional Asynchronous Events Supported 00:08:26.375 Namespace Attribute Notices: Supported 00:08:26.375 Firmware Activation Notices: Not Supported 00:08:26.375 ANA Change Notices: Not Supported 00:08:26.375 PLE Aggregate Log Change Notices: Not Supported 00:08:26.375 LBA Status Info Alert Notices: Not Supported 00:08:26.375 EGE Aggregate Log Change Notices: Not Supported 00:08:26.375 Normal NVM Subsystem Shutdown event: Not Supported 00:08:26.375 Zone Descriptor Change Notices: Not Supported 00:08:26.375 Discovery Log Change Notices: Not Supported 00:08:26.375 Controller Attributes 00:08:26.375 128-bit Host Identifier: Not Supported 00:08:26.375 Non-Operational Permissive Mode: Not Supported 00:08:26.375 NVM Sets: Not Supported 00:08:26.375 Read Recovery Levels: Not Supported 00:08:26.375 Endurance Groups: Not Supported 00:08:26.375 Predictable Latency Mode: Not Supported 00:08:26.375 Traffic Based Keep ALive: Not Supported 00:08:26.375 Namespace Granularity: Not Supported 00:08:26.375 SQ Associations: Not Supported 00:08:26.375 UUID List: Not Supported 00:08:26.375 Multi-Domain Subsystem: Not Supported 00:08:26.375 Fixed Capacity Management: Not Supported 00:08:26.375 Variable Capacity Management: Not Supported 00:08:26.375 Delete Endurance Group: Not Supported 00:08:26.375 Delete NVM Set: Not Supported 00:08:26.375 Extended LBA Formats Supported: Supported 00:08:26.375 Flexible Data Placement 
Supported: Not Supported 00:08:26.375 00:08:26.375 Controller Memory Buffer Support 00:08:26.375 ================================ 00:08:26.375 Supported: No 00:08:26.375 00:08:26.375 Persistent Memory Region Support 00:08:26.375 ================================ 00:08:26.375 Supported: No 00:08:26.375 00:08:26.375 Admin Command Set Attributes 00:08:26.375 ============================ 00:08:26.375 Security Send/Receive: Not Supported 00:08:26.375 Format NVM: Supported 00:08:26.375 Firmware Activate/Download: Not Supported 00:08:26.375 Namespace Management: Supported 00:08:26.375 Device Self-Test: Not Supported 00:08:26.375 Directives: Supported 00:08:26.375 NVMe-MI: Not Supported 00:08:26.375 Virtualization Management: Not Supported 00:08:26.375 Doorbell Buffer Config: Supported 00:08:26.375 Get LBA Status Capability: Not Supported 00:08:26.375 Command & Feature Lockdown Capability: Not Supported 00:08:26.375 Abort Command Limit: 4 00:08:26.375 Async Event Request Limit: 4 00:08:26.375 Number of Firmware Slots: N/A 00:08:26.375 Firmware Slot 1 Read-Only: N/A 00:08:26.375 Firmware Activation Without Reset: N/A 00:08:26.375 Multiple Update Detection Support: N/A 00:08:26.375 Firmware Update Granularity: No Information Provided 00:08:26.375 Per-Namespace SMART Log: Yes 00:08:26.375 Asymmetric Namespace Access Log Page: Not Supported 00:08:26.375 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:26.375 Command Effects Log Page: Supported 00:08:26.375 Get Log Page Extended Data: Supported 00:08:26.375 Telemetry Log Pages: Not Supported 00:08:26.375 Persistent Event Log Pages: Not Supported 00:08:26.375 Supported Log Pages Log Page: May Support 00:08:26.375 Commands Supported & Effects Log Page: Not Supported 00:08:26.375 Feature Identifiers & Effects Log Page:May Support 00:08:26.375 NVMe-MI Commands & Effects Log Page: May Support 00:08:26.375 Data Area 4 for Telemetry Log: Not Supported 00:08:26.375 Error Log Page Entries Supported: 1 00:08:26.375 Keep Alive: Not Supported 00:08:26.375 00:08:26.375 NVM Command Set Attributes 00:08:26.375 ========================== 00:08:26.375 Submission Queue Entry Size 00:08:26.375 Max: 64 00:08:26.375 Min: 64 00:08:26.375 Completion Queue Entry Size 00:08:26.375 Max: 16 00:08:26.375 Min: 16 00:08:26.375 Number of Namespaces: 256 00:08:26.375 Compare Command: Supported 00:08:26.375 Write Uncorrectable Command: Not Supported 00:08:26.375 Dataset Management Command: Supported 00:08:26.375 Write Zeroes Command: Supported 00:08:26.375 Set Features Save Field: Supported 00:08:26.375 Reservations: Not Supported 00:08:26.375 Timestamp: Supported 00:08:26.375 Copy: Supported 00:08:26.375 Volatile Write Cache: Present 00:08:26.375 Atomic Write Unit (Normal): 1 00:08:26.375 Atomic Write Unit (PFail): 1 00:08:26.375 Atomic Compare & Write Unit: 1 00:08:26.375 Fused Compare & Write: Not Supported 00:08:26.375 Scatter-Gather List 00:08:26.375 SGL Command Set: Supported 00:08:26.375 SGL Keyed: Not Supported 00:08:26.375 SGL Bit Bucket Descriptor: Not Supported 00:08:26.375 SGL Metadata Pointer: Not Supported 00:08:26.375 Oversized SGL: Not Supported 00:08:26.375 SGL Metadata Address: Not Supported 00:08:26.375 SGL Offset: Not Supported 00:08:26.375 Transport SGL Data Block: Not Supported 00:08:26.375 Replay Protected Memory Block: Not Supported 00:08:26.375 00:08:26.376 Firmware Slot Information 00:08:26.376 ========================= 00:08:26.376 Active slot: 1 00:08:26.376 Slot 1 Firmware Revision: 1.0 00:08:26.376 00:08:26.376 00:08:26.376 Commands Supported and Effects 
00:08:26.376 ============================== 00:08:26.376 Admin Commands 00:08:26.376 -------------- 00:08:26.376 Delete I/O Submission Queue (00h): Supported 00:08:26.376 Create I/O Submission Queue (01h): Supported 00:08:26.376 Get Log Page (02h): Supported 00:08:26.376 Delete I/O Completion Queue (04h): Supported 00:08:26.376 Create I/O Completion Queue (05h): Supported 00:08:26.376 Identify (06h): Supported 00:08:26.376 Abort (08h): Supported 00:08:26.376 Set Features (09h): Supported 00:08:26.376 Get Features (0Ah): Supported 00:08:26.376 Asynchronous Event Request (0Ch): Supported 00:08:26.376 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:26.376 Directive Send (19h): Supported 00:08:26.376 Directive Receive (1Ah): Supported 00:08:26.376 Virtualization Management (1Ch): Supported 00:08:26.376 Doorbell Buffer Config (7Ch): Supported 00:08:26.376 Format NVM (80h): Supported LBA-Change 00:08:26.376 I/O Commands 00:08:26.376 ------------ 00:08:26.376 Flush (00h): Supported LBA-Change 00:08:26.376 Write (01h): Supported LBA-Change 00:08:26.376 Read (02h): Supported 00:08:26.376 Compare (05h): Supported 00:08:26.376 Write Zeroes (08h): Supported LBA-Change 00:08:26.376 Dataset Management (09h): Supported LBA-Change 00:08:26.376 Unknown (0Ch): Supported 00:08:26.376 Unknown (12h): Supported 00:08:26.376 Copy (19h): Supported LBA-Change 00:08:26.376 Unknown (1Dh): Supported LBA-Change 00:08:26.376 00:08:26.376 Error Log 00:08:26.376 ========= 00:08:26.376 00:08:26.376 Arbitration 00:08:26.376 =========== 00:08:26.376 Arbitration Burst: no limit 00:08:26.376 00:08:26.376 Power Management 00:08:26.376 ================ 00:08:26.376 Number of Power States: 1 00:08:26.376 Current Power State: Power State #0 00:08:26.376 Power State #0: 00:08:26.376 Max Power: 25.00 W 00:08:26.376 Non-Operational State: Operational 00:08:26.376 Entry Latency: 16 microseconds 00:08:26.376 Exit Latency: 4 microseconds 00:08:26.376 Relative Read Throughput: 0 00:08:26.376 Relative Read Latency: 0 00:08:26.376 Relative Write Throughput: 0 00:08:26.376 Relative Write Latency: 0 00:08:26.376 Idle Power: Not Reported 00:08:26.376 Active Power: Not Reported 00:08:26.376 Non-Operational Permissive Mode: Not Supported 00:08:26.376 00:08:26.376 Health Information 00:08:26.376 ================== 00:08:26.376 Critical Warnings: 00:08:26.376 Available Spare Space: OK 00:08:26.376 Temperature: OK 00:08:26.376 Device Reliability: OK 00:08:26.376 Read Only: No 00:08:26.376 Volatile Memory Backup: OK 00:08:26.376 Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.376 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:26.376 Available Spare: 0% 00:08:26.376 Available Spare Threshold: 0% 00:08:26.376 Life Percentage Used: 0% 00:08:26.376 Data Units Read: 991 00:08:26.376 Data Units Written: 858 00:08:26.376 Host Read Commands: 53751 00:08:26.376 Host Write Commands: 52552 00:08:26.376 Controller Busy Time: 0 minutes 00:08:26.376 Power Cycles: 0 00:08:26.376 Power On Hours: 0 hours 00:08:26.376 Unsafe Shutdowns: 0 00:08:26.376 Unrecoverable Media Errors: 0 00:08:26.376 Lifetime Error Log Entries: 0 00:08:26.376 Warning Temperature Time: 0 minutes 00:08:26.376 Critical Temperature Time: 0 minutes 00:08:26.376 00:08:26.376 Number of Queues 00:08:26.376 ================ 00:08:26.376 Number of I/O Submission Queues: 64 00:08:26.376 Number of I/O Completion Queues: 64 00:08:26.376 00:08:26.376 ZNS Specific Controller Data 00:08:26.376 ============================ 00:08:26.376 Zone Append Size Limit: 0 00:08:26.376 
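The health sections in these dumps report temperatures in Kelvin and derive the parenthesized Celsius value with the integer conversion T(°C) = T(K) − 273, which is why the 323 Kelvin readings appear as 50 Celsius and the 343 Kelvin thresholds as 70 Celsius.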
00:08:26.376 00:08:26.376 Active Namespaces 00:08:26.376 ================= 00:08:26.376 Namespace ID:1 00:08:26.376 Error Recovery Timeout: Unlimited 00:08:26.376 Command Set Identifier: NVM (00h) 00:08:26.376 Deallocate: Supported 00:08:26.376 Deallocated/Unwritten Error: Supported 00:08:26.376 Deallocated Read Value: All 0x00 00:08:26.376 Deallocate in Write Zeroes: Not Supported 00:08:26.376 Deallocated Guard Field: 0xFFFF 00:08:26.376 Flush: Supported 00:08:26.376 Reservation: Not Supported 00:08:26.376 Namespace Sharing Capabilities: Private 00:08:26.376 Size (in LBAs): 1310720 (5GiB) 00:08:26.376 Capacity (in LBAs): 1310720 (5GiB) 00:08:26.376 Utilization (in LBAs): 1310720 (5GiB) 00:08:26.376 Thin Provisioning: Not Supported 00:08:26.376 Per-NS Atomic Units: No 00:08:26.376 Maximum Single Source Range Length: 128 00:08:26.376 Maximum Copy Length: 128 00:08:26.376 Maximum Source Range Count: 128 00:08:26.376 NGUID/EUI64 Never Reused: No 00:08:26.376 Namespace Write Protected: No 00:08:26.376 Number of LBA Formats: 8 00:08:26.376 Current LBA Format: LBA Format #04 00:08:26.376 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:26.376 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:26.376 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.376 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.376 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.376 [2024-11-27 00:31:03.053933] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74821 terminated unexpected 00:08:26.376 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.376 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.376 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.376 00:08:26.376 NVM Specific Namespace Data 00:08:26.376 =========================== 00:08:26.376 Logical Block Storage Tag Mask: 0 00:08:26.376 Protection Information Capabilities: 00:08:26.376 16b Guard Protection Information Storage Tag Support: No 00:08:26.376 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:26.376 Storage Tag Check Read Support: No 00:08:26.376 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.376 ===================================================== 00:08:26.376 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.376 ===================================================== 00:08:26.376 Controller Capabilities/Features 00:08:26.376 ================================ 00:08:26.376 Vendor ID: 1b36 00:08:26.376 Subsystem Vendor ID: 1af4 00:08:26.376 Serial Number: 12342 00:08:26.376 Model Number: QEMU NVMe Ctrl 00:08:26.376 Firmware Version: 8.0.0 00:08:26.376 Recommended Arb Burst: 6 00:08:26.376 IEEE OUI Identifier: 00 54 52 00:08:26.376 Multi-path I/O 00:08:26.376
May have multiple subsystem ports: No 00:08:26.376 May have multiple controllers: No 00:08:26.376 Associated with SR-IOV VF: No 00:08:26.376 Max Data Transfer Size: 524288 00:08:26.376 Max Number of Namespaces: 256 00:08:26.376 Max Number of I/O Queues: 64 00:08:26.376 NVMe Specification Version (VS): 1.4 00:08:26.376 NVMe Specification Version (Identify): 1.4 00:08:26.376 Maximum Queue Entries: 2048 00:08:26.376 Contiguous Queues Required: Yes 00:08:26.377 Arbitration Mechanisms Supported 00:08:26.377 Weighted Round Robin: Not Supported 00:08:26.377 Vendor Specific: Not Supported 00:08:26.377 Reset Timeout: 7500 ms 00:08:26.377 Doorbell Stride: 4 bytes 00:08:26.377 NVM Subsystem Reset: Not Supported 00:08:26.377 Command Sets Supported 00:08:26.377 NVM Command Set: Supported 00:08:26.377 Boot Partition: Not Supported 00:08:26.377 Memory Page Size Minimum: 4096 bytes 00:08:26.377 Memory Page Size Maximum: 65536 bytes 00:08:26.377 Persistent Memory Region: Not Supported 00:08:26.377 Optional Asynchronous Events Supported 00:08:26.377 Namespace Attribute Notices: Supported 00:08:26.377 Firmware Activation Notices: Not Supported 00:08:26.377 ANA Change Notices: Not Supported 00:08:26.377 PLE Aggregate Log Change Notices: Not Supported 00:08:26.377 LBA Status Info Alert Notices: Not Supported 00:08:26.377 EGE Aggregate Log Change Notices: Not Supported 00:08:26.377 Normal NVM Subsystem Shutdown event: Not Supported 00:08:26.377 Zone Descriptor Change Notices: Not Supported 00:08:26.377 Discovery Log Change Notices: Not Supported 00:08:26.377 Controller Attributes 00:08:26.377 128-bit Host Identifier: Not Supported 00:08:26.377 Non-Operational Permissive Mode: Not Supported 00:08:26.377 NVM Sets: Not Supported 00:08:26.377 Read Recovery Levels: Not Supported 00:08:26.377 Endurance Groups: Not Supported 00:08:26.377 Predictable Latency Mode: Not Supported 00:08:26.377 Traffic Based Keep ALive: Not Supported 00:08:26.377 Namespace Granularity: Not Supported 00:08:26.377 SQ Associations: Not Supported 00:08:26.377 UUID List: Not Supported 00:08:26.377 Multi-Domain Subsystem: Not Supported 00:08:26.377 Fixed Capacity Management: Not Supported 00:08:26.377 Variable Capacity Management: Not Supported 00:08:26.377 Delete Endurance Group: Not Supported 00:08:26.377 Delete NVM Set: Not Supported 00:08:26.377 Extended LBA Formats Supported: Supported 00:08:26.377 Flexible Data Placement Supported: Not Supported 00:08:26.377 00:08:26.377 Controller Memory Buffer Support 00:08:26.377 ================================ 00:08:26.377 Supported: No 00:08:26.377 00:08:26.377 Persistent Memory Region Support 00:08:26.377 ================================ 00:08:26.377 Supported: No 00:08:26.377 00:08:26.377 Admin Command Set Attributes 00:08:26.377 ============================ 00:08:26.377 Security Send/Receive: Not Supported 00:08:26.377 Format NVM: Supported 00:08:26.377 Firmware Activate/Download: Not Supported 00:08:26.377 Namespace Management: Supported 00:08:26.377 Device Self-Test: Not Supported 00:08:26.377 Directives: Supported 00:08:26.377 NVMe-MI: Not Supported 00:08:26.377 Virtualization Management: Not Supported 00:08:26.377 Doorbell Buffer Config: Supported 00:08:26.377 Get LBA Status Capability: Not Supported 00:08:26.377 Command & Feature Lockdown Capability: Not Supported 00:08:26.377 Abort Command Limit: 4 00:08:26.377 Async Event Request Limit: 4 00:08:26.377 Number of Firmware Slots: N/A 00:08:26.377 Firmware Slot 1 Read-Only: N/A 00:08:26.377 Firmware Activation Without Reset: N/A 00:08:26.377 
Multiple Update Detection Support: N/A 00:08:26.377 Firmware Update Granularity: No Information Provided 00:08:26.377 Per-Namespace SMART Log: Yes 00:08:26.377 Asymmetric Namespace Access Log Page: Not Supported 00:08:26.377 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:26.377 Command Effects Log Page: Supported 00:08:26.377 Get Log Page Extended Data: Supported 00:08:26.377 Telemetry Log Pages: Not Supported 00:08:26.377 Persistent Event Log Pages: Not Supported 00:08:26.377 Supported Log Pages Log Page: May Support 00:08:26.377 Commands Supported & Effects Log Page: Not Supported 00:08:26.377 Feature Identifiers & Effects Log Page:May Support 00:08:26.377 NVMe-MI Commands & Effects Log Page: May Support 00:08:26.377 Data Area 4 for Telemetry Log: Not Supported 00:08:26.377 Error Log Page Entries Supported: 1 00:08:26.377 Keep Alive: Not Supported 00:08:26.377 00:08:26.377 NVM Command Set Attributes 00:08:26.377 ========================== 00:08:26.377 Submission Queue Entry Size 00:08:26.377 Max: 64 00:08:26.377 Min: 64 00:08:26.377 Completion Queue Entry Size 00:08:26.377 Max: 16 00:08:26.377 Min: 16 00:08:26.377 Number of Namespaces: 256 00:08:26.377 Compare Command: Supported 00:08:26.377 Write Uncorrectable Command: Not Supported 00:08:26.377 Dataset Management Command: Supported 00:08:26.377 Write Zeroes Command: Supported 00:08:26.377 Set Features Save Field: Supported 00:08:26.377 Reservations: Not Supported 00:08:26.377 Timestamp: Supported 00:08:26.377 Copy: Supported 00:08:26.377 Volatile Write Cache: Present 00:08:26.377 Atomic Write Unit (Normal): 1 00:08:26.377 Atomic Write Unit (PFail): 1 00:08:26.377 Atomic Compare & Write Unit: 1 00:08:26.377 Fused Compare & Write: Not Supported 00:08:26.377 Scatter-Gather List 00:08:26.377 SGL Command Set: Supported 00:08:26.377 SGL Keyed: Not Supported 00:08:26.377 SGL Bit Bucket Descriptor: Not Supported 00:08:26.377 SGL Metadata Pointer: Not Supported 00:08:26.377 Oversized SGL: Not Supported 00:08:26.377 SGL Metadata Address: Not Supported 00:08:26.377 SGL Offset: Not Supported 00:08:26.377 Transport SGL Data Block: Not Supported 00:08:26.377 Replay Protected Memory Block: Not Supported 00:08:26.377 00:08:26.377 Firmware Slot Information 00:08:26.377 ========================= 00:08:26.377 Active slot: 1 00:08:26.377 Slot 1 Firmware Revision: 1.0 00:08:26.377 00:08:26.377 00:08:26.377 Commands Supported and Effects 00:08:26.377 ============================== 00:08:26.377 Admin Commands 00:08:26.377 -------------- 00:08:26.377 Delete I/O Submission Queue (00h): Supported 00:08:26.377 Create I/O Submission Queue (01h): Supported 00:08:26.377 Get Log Page (02h): Supported 00:08:26.377 Delete I/O Completion Queue (04h): Supported 00:08:26.377 Create I/O Completion Queue (05h): Supported 00:08:26.377 Identify (06h): Supported 00:08:26.377 Abort (08h): Supported 00:08:26.377 Set Features (09h): Supported 00:08:26.377 Get Features (0Ah): Supported 00:08:26.377 Asynchronous Event Request (0Ch): Supported 00:08:26.377 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:26.377 Directive Send (19h): Supported 00:08:26.377 Directive Receive (1Ah): Supported 00:08:26.377 Virtualization Management (1Ch): Supported 00:08:26.377 Doorbell Buffer Config (7Ch): Supported 00:08:26.377 Format NVM (80h): Supported LBA-Change 00:08:26.377 I/O Commands 00:08:26.377 ------------ 00:08:26.377 Flush (00h): Supported LBA-Change 00:08:26.377 Write (01h): Supported LBA-Change 00:08:26.377 Read (02h): Supported 00:08:26.377 Compare (05h): Supported 
00:08:26.377 Write Zeroes (08h): Supported LBA-Change 00:08:26.377 Dataset Management (09h): Supported LBA-Change 00:08:26.377 Unknown (0Ch): Supported 00:08:26.377 Unknown (12h): Supported 00:08:26.377 Copy (19h): Supported LBA-Change 00:08:26.377 Unknown (1Dh): Supported LBA-Change 00:08:26.377 00:08:26.377 Error Log 00:08:26.377 ========= 00:08:26.377 00:08:26.377 Arbitration 00:08:26.377 =========== 00:08:26.377 Arbitration Burst: no limit 00:08:26.377 00:08:26.377 Power Management 00:08:26.377 ================ 00:08:26.377 Number of Power States: 1 00:08:26.377 Current Power State: Power State #0 00:08:26.377 Power State #0: 00:08:26.377 Max Power: 25.00 W 00:08:26.377 Non-Operational State: Operational 00:08:26.377 Entry Latency: 16 microseconds 00:08:26.377 Exit Latency: 4 microseconds 00:08:26.377 Relative Read Throughput: 0 00:08:26.377 Relative Read Latency: 0 00:08:26.377 Relative Write Throughput: 0 00:08:26.377 Relative Write Latency: 0 00:08:26.377 Idle Power: Not Reported 00:08:26.377 Active Power: Not Reported 00:08:26.377 Non-Operational Permissive Mode: Not Supported 00:08:26.377 00:08:26.377 Health Information 00:08:26.377 ================== 00:08:26.377 Critical Warnings: 00:08:26.377 Available Spare Space: OK 00:08:26.377 Temperature: OK 00:08:26.377 Device Reliability: OK 00:08:26.377 Read Only: No 00:08:26.377 Volatile Memory Backup: OK 00:08:26.377 Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.377 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:26.377 Available Spare: 0% 00:08:26.377 Available Spare Threshold: 0% 00:08:26.377 Life Percentage Used: 0% 00:08:26.377 Data Units Read: 2210 00:08:26.378 Data Units Written: 1997 00:08:26.378 Host Read Commands: 111466 00:08:26.378 Host Write Commands: 109735 00:08:26.378 Controller Busy Time: 0 minutes 00:08:26.378 Power Cycles: 0 00:08:26.378 Power On Hours: 0 hours 00:08:26.378 Unsafe Shutdowns: 0 00:08:26.378 Unrecoverable Media Errors: 0 00:08:26.378 Lifetime Error Log Entries: 0 00:08:26.378 Warning Temperature Time: 0 minutes 00:08:26.378 Critical Temperature Time: 0 minutes 00:08:26.378 00:08:26.378 Number of Queues 00:08:26.378 ================ 00:08:26.378 Number of I/O Submission Queues: 64 00:08:26.378 Number of I/O Completion Queues: 64 00:08:26.378 00:08:26.378 ZNS Specific Controller Data 00:08:26.378 ============================ 00:08:26.378 Zone Append Size Limit: 0 00:08:26.378 00:08:26.378 00:08:26.378 Active Namespaces 00:08:26.378 ================= 00:08:26.378 Namespace ID:1 00:08:26.378 Error Recovery Timeout: Unlimited 00:08:26.378 Command Set Identifier: NVM (00h) 00:08:26.378 Deallocate: Supported 00:08:26.378 Deallocated/Unwritten Error: Supported 00:08:26.378 Deallocated Read Value: All 0x00 00:08:26.378 Deallocate in Write Zeroes: Not Supported 00:08:26.378 Deallocated Guard Field: 0xFFFF 00:08:26.378 Flush: Supported 00:08:26.378 Reservation: Not Supported 00:08:26.378 Namespace Sharing Capabilities: Private 00:08:26.378 Size (in LBAs): 1048576 (4GiB) 00:08:26.378 Capacity (in LBAs): 1048576 (4GiB) 00:08:26.378 Utilization (in LBAs): 1048576 (4GiB) 00:08:26.378 Thin Provisioning: Not Supported 00:08:26.378 Per-NS Atomic Units: No 00:08:26.378 Maximum Single Source Range Length: 128 00:08:26.378 Maximum Copy Length: 128 00:08:26.378 Maximum Source Range Count: 128 00:08:26.378 NGUID/EUI64 Never Reused: No 00:08:26.378 Namespace Write Protected: No 00:08:26.378 Number of LBA Formats: 8 00:08:26.378 Current LBA Format: LBA Format #04 00:08:26.378 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:08:26.378 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:26.378 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.378 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.378 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.378 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.378 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.378 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.378 00:08:26.378 NVM Specific Namespace Data 00:08:26.378 =========================== 00:08:26.378 Logical Block Storage Tag Mask: 0 00:08:26.378 Protection Information Capabilities: 00:08:26.378 16b Guard Protection Information Storage Tag Support: No 00:08:26.378 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:26.378 Storage Tag Check Read Support: No 00:08:26.378 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Namespace ID:2 00:08:26.378 Error Recovery Timeout: Unlimited 00:08:26.378 Command Set Identifier: NVM (00h) 00:08:26.378 Deallocate: Supported 00:08:26.378 Deallocated/Unwritten Error: Supported 00:08:26.378 Deallocated Read Value: All 0x00 00:08:26.378 Deallocate in Write Zeroes: Not Supported 00:08:26.378 Deallocated Guard Field: 0xFFFF 00:08:26.378 Flush: Supported 00:08:26.378 Reservation: Not Supported 00:08:26.378 Namespace Sharing Capabilities: Private 00:08:26.378 Size (in LBAs): 1048576 (4GiB) 00:08:26.378 Capacity (in LBAs): 1048576 (4GiB) 00:08:26.378 Utilization (in LBAs): 1048576 (4GiB) 00:08:26.378 Thin Provisioning: Not Supported 00:08:26.378 Per-NS Atomic Units: No 00:08:26.378 Maximum Single Source Range Length: 128 00:08:26.378 Maximum Copy Length: 128 00:08:26.378 Maximum Source Range Count: 128 00:08:26.378 NGUID/EUI64 Never Reused: No 00:08:26.378 Namespace Write Protected: No 00:08:26.378 Number of LBA Formats: 8 00:08:26.378 Current LBA Format: LBA Format #04 00:08:26.378 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:26.378 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:26.378 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.378 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.378 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.378 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.378 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.378 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.378 00:08:26.378 NVM Specific Namespace Data 00:08:26.378 =========================== 00:08:26.378 Logical Block Storage Tag Mask: 0 00:08:26.378 Protection Information Capabilities: 00:08:26.378 16b Guard Protection Information Storage Tag Support: No 00:08:26.378 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:26.378 Storage Tag Check Read Support: No 00:08:26.378 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Namespace ID:3 00:08:26.378 Error Recovery Timeout: Unlimited 00:08:26.378 Command Set Identifier: NVM (00h) 00:08:26.378 Deallocate: Supported 00:08:26.378 Deallocated/Unwritten Error: Supported 00:08:26.378 Deallocated Read Value: All 0x00 00:08:26.378 Deallocate in Write Zeroes: Not Supported 00:08:26.378 Deallocated Guard Field: 0xFFFF 00:08:26.378 Flush: Supported 00:08:26.378 Reservation: Not Supported 00:08:26.378 Namespace Sharing Capabilities: Private 00:08:26.378 Size (in LBAs): 1048576 (4GiB) 00:08:26.378 Capacity (in LBAs): 1048576 (4GiB) 00:08:26.378 Utilization (in LBAs): 1048576 (4GiB) 00:08:26.378 Thin Provisioning: Not Supported 00:08:26.378 Per-NS Atomic Units: No 00:08:26.378 Maximum Single Source Range Length: 128 00:08:26.378 Maximum Copy Length: 128 00:08:26.378 Maximum Source Range Count: 128 00:08:26.378 NGUID/EUI64 Never Reused: No 00:08:26.378 Namespace Write Protected: No 00:08:26.378 Number of LBA Formats: 8 00:08:26.378 Current LBA Format: LBA Format #04 00:08:26.378 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:26.378 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:26.378 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.378 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.378 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.378 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.378 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.378 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.378 00:08:26.378 NVM Specific Namespace Data 00:08:26.378 =========================== 00:08:26.378 Logical Block Storage Tag Mask: 0 00:08:26.378 Protection Information Capabilities: 00:08:26.378 16b Guard Protection Information Storage Tag Support: No 00:08:26.378 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:26.378 Storage Tag Check Read Support: No 00:08:26.378 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.378 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:26.378 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:26.639 ===================================================== 00:08:26.639 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.639 ===================================================== 00:08:26.639 Controller Capabilities/Features 00:08:26.639 ================================ 00:08:26.639 Vendor ID: 1b36 00:08:26.639 Subsystem Vendor ID: 1af4 00:08:26.639 Serial Number: 12340 00:08:26.639 Model Number: QEMU NVMe Ctrl 00:08:26.639 Firmware Version: 8.0.0 00:08:26.639 Recommended Arb Burst: 6 00:08:26.639 IEEE OUI Identifier: 00 54 52 00:08:26.639 Multi-path I/O 00:08:26.639 May have multiple subsystem ports: No 00:08:26.639 May have multiple controllers: No 00:08:26.639 Associated with SR-IOV VF: No 00:08:26.639 Max Data Transfer Size: 524288 00:08:26.639 Max Number of Namespaces: 256 00:08:26.639 Max Number of I/O Queues: 64 00:08:26.639 NVMe Specification Version (VS): 1.4 00:08:26.639 NVMe Specification Version (Identify): 1.4 00:08:26.639 Maximum Queue Entries: 2048 00:08:26.639 Contiguous Queues Required: Yes 00:08:26.639 Arbitration Mechanisms Supported 00:08:26.639 Weighted Round Robin: Not Supported 00:08:26.639 Vendor Specific: Not Supported 00:08:26.639 Reset Timeout: 7500 ms 00:08:26.639 Doorbell Stride: 4 bytes 00:08:26.639 NVM Subsystem Reset: Not Supported 00:08:26.639 Command Sets Supported 00:08:26.639 NVM Command Set: Supported 00:08:26.639 Boot Partition: Not Supported 00:08:26.639 Memory Page Size Minimum: 4096 bytes 00:08:26.639 Memory Page Size Maximum: 65536 bytes 00:08:26.640 Persistent Memory Region: Not Supported 00:08:26.640 Optional Asynchronous Events Supported 00:08:26.640 Namespace Attribute Notices: Supported 00:08:26.640 Firmware Activation Notices: Not Supported 00:08:26.640 ANA Change Notices: Not Supported 00:08:26.640 PLE Aggregate Log Change Notices: Not Supported 00:08:26.640 LBA Status Info Alert Notices: Not Supported 00:08:26.640 EGE Aggregate Log Change Notices: Not Supported 00:08:26.640 Normal NVM Subsystem Shutdown event: Not Supported 00:08:26.640 Zone Descriptor Change Notices: Not Supported 00:08:26.640 Discovery Log Change Notices: Not Supported 00:08:26.640 Controller Attributes 00:08:26.640 128-bit Host Identifier: Not Supported 00:08:26.640 Non-Operational Permissive Mode: Not Supported 00:08:26.640 NVM Sets: Not Supported 00:08:26.640 Read Recovery Levels: Not Supported 00:08:26.640 Endurance Groups: Not Supported 00:08:26.640 Predictable Latency Mode: Not Supported 00:08:26.640 Traffic Based Keep ALive: Not Supported 00:08:26.640 Namespace Granularity: Not Supported 00:08:26.640 SQ Associations: Not Supported 00:08:26.640 UUID List: Not Supported 00:08:26.640 Multi-Domain Subsystem: Not Supported 00:08:26.640 Fixed Capacity Management: Not Supported 00:08:26.640 Variable Capacity Management: Not Supported 00:08:26.640 Delete Endurance Group: Not Supported 00:08:26.640 Delete NVM Set: Not Supported 00:08:26.640 Extended LBA Formats Supported: Supported 00:08:26.640 Flexible Data Placement Supported: Not Supported 00:08:26.640 00:08:26.640 Controller Memory Buffer Support 00:08:26.640 ================================ 00:08:26.640 Supported: No 00:08:26.640 00:08:26.640 Persistent Memory Region Support 00:08:26.640 
================================ 00:08:26.640 Supported: No 00:08:26.640 00:08:26.640 Admin Command Set Attributes 00:08:26.640 ============================ 00:08:26.640 Security Send/Receive: Not Supported 00:08:26.640 Format NVM: Supported 00:08:26.640 Firmware Activate/Download: Not Supported 00:08:26.640 Namespace Management: Supported 00:08:26.640 Device Self-Test: Not Supported 00:08:26.640 Directives: Supported 00:08:26.640 NVMe-MI: Not Supported 00:08:26.640 Virtualization Management: Not Supported 00:08:26.640 Doorbell Buffer Config: Supported 00:08:26.640 Get LBA Status Capability: Not Supported 00:08:26.640 Command & Feature Lockdown Capability: Not Supported 00:08:26.640 Abort Command Limit: 4 00:08:26.640 Async Event Request Limit: 4 00:08:26.640 Number of Firmware Slots: N/A 00:08:26.640 Firmware Slot 1 Read-Only: N/A 00:08:26.640 Firmware Activation Without Reset: N/A 00:08:26.640 Multiple Update Detection Support: N/A 00:08:26.640 Firmware Update Granularity: No Information Provided 00:08:26.640 Per-Namespace SMART Log: Yes 00:08:26.640 Asymmetric Namespace Access Log Page: Not Supported 00:08:26.640 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:26.640 Command Effects Log Page: Supported 00:08:26.640 Get Log Page Extended Data: Supported 00:08:26.640 Telemetry Log Pages: Not Supported 00:08:26.640 Persistent Event Log Pages: Not Supported 00:08:26.640 Supported Log Pages Log Page: May Support 00:08:26.640 Commands Supported & Effects Log Page: Not Supported 00:08:26.640 Feature Identifiers & Effects Log Page:May Support 00:08:26.640 NVMe-MI Commands & Effects Log Page: May Support 00:08:26.640 Data Area 4 for Telemetry Log: Not Supported 00:08:26.640 Error Log Page Entries Supported: 1 00:08:26.640 Keep Alive: Not Supported 00:08:26.640 00:08:26.640 NVM Command Set Attributes 00:08:26.640 ========================== 00:08:26.640 Submission Queue Entry Size 00:08:26.640 Max: 64 00:08:26.640 Min: 64 00:08:26.640 Completion Queue Entry Size 00:08:26.640 Max: 16 00:08:26.640 Min: 16 00:08:26.640 Number of Namespaces: 256 00:08:26.640 Compare Command: Supported 00:08:26.640 Write Uncorrectable Command: Not Supported 00:08:26.640 Dataset Management Command: Supported 00:08:26.640 Write Zeroes Command: Supported 00:08:26.640 Set Features Save Field: Supported 00:08:26.640 Reservations: Not Supported 00:08:26.640 Timestamp: Supported 00:08:26.640 Copy: Supported 00:08:26.640 Volatile Write Cache: Present 00:08:26.640 Atomic Write Unit (Normal): 1 00:08:26.640 Atomic Write Unit (PFail): 1 00:08:26.640 Atomic Compare & Write Unit: 1 00:08:26.640 Fused Compare & Write: Not Supported 00:08:26.640 Scatter-Gather List 00:08:26.640 SGL Command Set: Supported 00:08:26.640 SGL Keyed: Not Supported 00:08:26.640 SGL Bit Bucket Descriptor: Not Supported 00:08:26.640 SGL Metadata Pointer: Not Supported 00:08:26.640 Oversized SGL: Not Supported 00:08:26.640 SGL Metadata Address: Not Supported 00:08:26.640 SGL Offset: Not Supported 00:08:26.640 Transport SGL Data Block: Not Supported 00:08:26.640 Replay Protected Memory Block: Not Supported 00:08:26.640 00:08:26.640 Firmware Slot Information 00:08:26.640 ========================= 00:08:26.640 Active slot: 1 00:08:26.640 Slot 1 Firmware Revision: 1.0 00:08:26.640 00:08:26.640 00:08:26.640 Commands Supported and Effects 00:08:26.640 ============================== 00:08:26.640 Admin Commands 00:08:26.640 -------------- 00:08:26.640 Delete I/O Submission Queue (00h): Supported 00:08:26.640 Create I/O Submission Queue (01h): Supported 00:08:26.640 
Get Log Page (02h): Supported 00:08:26.640 Delete I/O Completion Queue (04h): Supported 00:08:26.640 Create I/O Completion Queue (05h): Supported 00:08:26.640 Identify (06h): Supported 00:08:26.640 Abort (08h): Supported 00:08:26.640 Set Features (09h): Supported 00:08:26.640 Get Features (0Ah): Supported 00:08:26.640 Asynchronous Event Request (0Ch): Supported 00:08:26.640 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:26.640 Directive Send (19h): Supported 00:08:26.640 Directive Receive (1Ah): Supported 00:08:26.640 Virtualization Management (1Ch): Supported 00:08:26.640 Doorbell Buffer Config (7Ch): Supported 00:08:26.640 Format NVM (80h): Supported LBA-Change 00:08:26.640 I/O Commands 00:08:26.640 ------------ 00:08:26.640 Flush (00h): Supported LBA-Change 00:08:26.640 Write (01h): Supported LBA-Change 00:08:26.640 Read (02h): Supported 00:08:26.640 Compare (05h): Supported 00:08:26.640 Write Zeroes (08h): Supported LBA-Change 00:08:26.640 Dataset Management (09h): Supported LBA-Change 00:08:26.640 Unknown (0Ch): Supported 00:08:26.640 Unknown (12h): Supported 00:08:26.640 Copy (19h): Supported LBA-Change 00:08:26.640 Unknown (1Dh): Supported LBA-Change 00:08:26.640 00:08:26.640 Error Log 00:08:26.640 ========= 00:08:26.640 00:08:26.640 Arbitration 00:08:26.640 =========== 00:08:26.640 Arbitration Burst: no limit 00:08:26.640 00:08:26.640 Power Management 00:08:26.640 ================ 00:08:26.640 Number of Power States: 1 00:08:26.640 Current Power State: Power State #0 00:08:26.640 Power State #0: 00:08:26.640 Max Power: 25.00 W 00:08:26.640 Non-Operational State: Operational 00:08:26.640 Entry Latency: 16 microseconds 00:08:26.640 Exit Latency: 4 microseconds 00:08:26.640 Relative Read Throughput: 0 00:08:26.640 Relative Read Latency: 0 00:08:26.640 Relative Write Throughput: 0 00:08:26.640 Relative Write Latency: 0 00:08:26.640 Idle Power: Not Reported 00:08:26.640 Active Power: Not Reported 00:08:26.640 Non-Operational Permissive Mode: Not Supported 00:08:26.640 00:08:26.640 Health Information 00:08:26.640 ================== 00:08:26.640 Critical Warnings: 00:08:26.640 Available Spare Space: OK 00:08:26.640 Temperature: OK 00:08:26.640 Device Reliability: OK 00:08:26.640 Read Only: No 00:08:26.640 Volatile Memory Backup: OK 00:08:26.640 Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.640 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:26.640 Available Spare: 0% 00:08:26.640 Available Spare Threshold: 0% 00:08:26.640 Life Percentage Used: 0% 00:08:26.640 Data Units Read: 675 00:08:26.640 Data Units Written: 604 00:08:26.640 Host Read Commands: 36436 00:08:26.640 Host Write Commands: 36222 00:08:26.640 Controller Busy Time: 0 minutes 00:08:26.640 Power Cycles: 0 00:08:26.640 Power On Hours: 0 hours 00:08:26.640 Unsafe Shutdowns: 0 00:08:26.640 Unrecoverable Media Errors: 0 00:08:26.640 Lifetime Error Log Entries: 0 00:08:26.640 Warning Temperature Time: 0 minutes 00:08:26.640 Critical Temperature Time: 0 minutes 00:08:26.640 00:08:26.640 Number of Queues 00:08:26.640 ================ 00:08:26.640 Number of I/O Submission Queues: 64 00:08:26.640 Number of I/O Completion Queues: 64 00:08:26.640 00:08:26.640 ZNS Specific Controller Data 00:08:26.640 ============================ 00:08:26.641 Zone Append Size Limit: 0 00:08:26.641 00:08:26.641 00:08:26.641 Active Namespaces 00:08:26.641 ================= 00:08:26.641 Namespace ID:1 00:08:26.641 Error Recovery Timeout: Unlimited 00:08:26.641 Command Set Identifier: NVM (00h) 00:08:26.641 Deallocate: Supported 
00:08:26.641 Deallocated/Unwritten Error: Supported 00:08:26.641 Deallocated Read Value: All 0x00 00:08:26.641 Deallocate in Write Zeroes: Not Supported 00:08:26.641 Deallocated Guard Field: 0xFFFF 00:08:26.641 Flush: Supported 00:08:26.641 Reservation: Not Supported 00:08:26.641 Metadata Transferred as: Separate Metadata Buffer 00:08:26.641 Namespace Sharing Capabilities: Private 00:08:26.641 Size (in LBAs): 1548666 (5GiB) 00:08:26.641 Capacity (in LBAs): 1548666 (5GiB) 00:08:26.641 Utilization (in LBAs): 1548666 (5GiB) 00:08:26.641 Thin Provisioning: Not Supported 00:08:26.641 Per-NS Atomic Units: No 00:08:26.641 Maximum Single Source Range Length: 128 00:08:26.641 Maximum Copy Length: 128 00:08:26.641 Maximum Source Range Count: 128 00:08:26.641 NGUID/EUI64 Never Reused: No 00:08:26.641 Namespace Write Protected: No 00:08:26.641 Number of LBA Formats: 8 00:08:26.641 Current LBA Format: LBA Format #07 00:08:26.641 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:26.641 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:26.641 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.641 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.641 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.641 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.641 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.641 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.641 00:08:26.641 NVM Specific Namespace Data 00:08:26.641 =========================== 00:08:26.641 Logical Block Storage Tag Mask: 0 00:08:26.641 Protection Information Capabilities: 00:08:26.641 16b Guard Protection Information Storage Tag Support: No 00:08:26.641 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:26.641 Storage Tag Check Read Support: No 00:08:26.641 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.641 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:26.641 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:26.903 ===================================================== 00:08:26.903 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.903 ===================================================== 00:08:26.903 Controller Capabilities/Features 00:08:26.903 ================================ 00:08:26.903 Vendor ID: 1b36 00:08:26.903 Subsystem Vendor ID: 1af4 00:08:26.903 Serial Number: 12341 00:08:26.903 Model Number: QEMU NVMe Ctrl 00:08:26.903 Firmware Version: 8.0.0 00:08:26.903 Recommended Arb Burst: 6 00:08:26.903 IEEE OUI Identifier: 00 54 52 00:08:26.903 Multi-path I/O 00:08:26.903 May have multiple subsystem ports: No 00:08:26.903 May have multiple 
controllers: No 00:08:26.903 Associated with SR-IOV VF: No 00:08:26.903 Max Data Transfer Size: 524288 00:08:26.903 Max Number of Namespaces: 256 00:08:26.903 Max Number of I/O Queues: 64 00:08:26.903 NVMe Specification Version (VS): 1.4 00:08:26.903 NVMe Specification Version (Identify): 1.4 00:08:26.903 Maximum Queue Entries: 2048 00:08:26.903 Contiguous Queues Required: Yes 00:08:26.903 Arbitration Mechanisms Supported 00:08:26.903 Weighted Round Robin: Not Supported 00:08:26.903 Vendor Specific: Not Supported 00:08:26.903 Reset Timeout: 7500 ms 00:08:26.903 Doorbell Stride: 4 bytes 00:08:26.903 NVM Subsystem Reset: Not Supported 00:08:26.903 Command Sets Supported 00:08:26.903 NVM Command Set: Supported 00:08:26.903 Boot Partition: Not Supported 00:08:26.903 Memory Page Size Minimum: 4096 bytes 00:08:26.903 Memory Page Size Maximum: 65536 bytes 00:08:26.903 Persistent Memory Region: Not Supported 00:08:26.904 Optional Asynchronous Events Supported 00:08:26.904 Namespace Attribute Notices: Supported 00:08:26.904 Firmware Activation Notices: Not Supported 00:08:26.904 ANA Change Notices: Not Supported 00:08:26.904 PLE Aggregate Log Change Notices: Not Supported 00:08:26.904 LBA Status Info Alert Notices: Not Supported 00:08:26.904 EGE Aggregate Log Change Notices: Not Supported 00:08:26.904 Normal NVM Subsystem Shutdown event: Not Supported 00:08:26.904 Zone Descriptor Change Notices: Not Supported 00:08:26.904 Discovery Log Change Notices: Not Supported 00:08:26.904 Controller Attributes 00:08:26.904 128-bit Host Identifier: Not Supported 00:08:26.904 Non-Operational Permissive Mode: Not Supported 00:08:26.904 NVM Sets: Not Supported 00:08:26.904 Read Recovery Levels: Not Supported 00:08:26.904 Endurance Groups: Not Supported 00:08:26.904 Predictable Latency Mode: Not Supported 00:08:26.904 Traffic Based Keep ALive: Not Supported 00:08:26.904 Namespace Granularity: Not Supported 00:08:26.904 SQ Associations: Not Supported 00:08:26.904 UUID List: Not Supported 00:08:26.904 Multi-Domain Subsystem: Not Supported 00:08:26.904 Fixed Capacity Management: Not Supported 00:08:26.904 Variable Capacity Management: Not Supported 00:08:26.904 Delete Endurance Group: Not Supported 00:08:26.904 Delete NVM Set: Not Supported 00:08:26.904 Extended LBA Formats Supported: Supported 00:08:26.904 Flexible Data Placement Supported: Not Supported 00:08:26.904 00:08:26.904 Controller Memory Buffer Support 00:08:26.904 ================================ 00:08:26.904 Supported: No 00:08:26.904 00:08:26.904 Persistent Memory Region Support 00:08:26.904 ================================ 00:08:26.904 Supported: No 00:08:26.904 00:08:26.904 Admin Command Set Attributes 00:08:26.904 ============================ 00:08:26.904 Security Send/Receive: Not Supported 00:08:26.904 Format NVM: Supported 00:08:26.904 Firmware Activate/Download: Not Supported 00:08:26.904 Namespace Management: Supported 00:08:26.904 Device Self-Test: Not Supported 00:08:26.904 Directives: Supported 00:08:26.904 NVMe-MI: Not Supported 00:08:26.904 Virtualization Management: Not Supported 00:08:26.904 Doorbell Buffer Config: Supported 00:08:26.904 Get LBA Status Capability: Not Supported 00:08:26.904 Command & Feature Lockdown Capability: Not Supported 00:08:26.904 Abort Command Limit: 4 00:08:26.904 Async Event Request Limit: 4 00:08:26.904 Number of Firmware Slots: N/A 00:08:26.904 Firmware Slot 1 Read-Only: N/A 00:08:26.904 Firmware Activation Without Reset: N/A 00:08:26.904 Multiple Update Detection Support: N/A 00:08:26.904 Firmware Update 
Granularity: No Information Provided 00:08:26.904 Per-Namespace SMART Log: Yes 00:08:26.904 Asymmetric Namespace Access Log Page: Not Supported 00:08:26.904 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:26.904 Command Effects Log Page: Supported 00:08:26.904 Get Log Page Extended Data: Supported 00:08:26.904 Telemetry Log Pages: Not Supported 00:08:26.904 Persistent Event Log Pages: Not Supported 00:08:26.904 Supported Log Pages Log Page: May Support 00:08:26.904 Commands Supported & Effects Log Page: Not Supported 00:08:26.904 Feature Identifiers & Effects Log Page:May Support 00:08:26.904 NVMe-MI Commands & Effects Log Page: May Support 00:08:26.904 Data Area 4 for Telemetry Log: Not Supported 00:08:26.904 Error Log Page Entries Supported: 1 00:08:26.904 Keep Alive: Not Supported 00:08:26.904 00:08:26.904 NVM Command Set Attributes 00:08:26.904 ========================== 00:08:26.904 Submission Queue Entry Size 00:08:26.904 Max: 64 00:08:26.904 Min: 64 00:08:26.904 Completion Queue Entry Size 00:08:26.904 Max: 16 00:08:26.904 Min: 16 00:08:26.904 Number of Namespaces: 256 00:08:26.904 Compare Command: Supported 00:08:26.904 Write Uncorrectable Command: Not Supported 00:08:26.904 Dataset Management Command: Supported 00:08:26.904 Write Zeroes Command: Supported 00:08:26.904 Set Features Save Field: Supported 00:08:26.904 Reservations: Not Supported 00:08:26.904 Timestamp: Supported 00:08:26.904 Copy: Supported 00:08:26.904 Volatile Write Cache: Present 00:08:26.904 Atomic Write Unit (Normal): 1 00:08:26.904 Atomic Write Unit (PFail): 1 00:08:26.904 Atomic Compare & Write Unit: 1 00:08:26.904 Fused Compare & Write: Not Supported 00:08:26.904 Scatter-Gather List 00:08:26.904 SGL Command Set: Supported 00:08:26.904 SGL Keyed: Not Supported 00:08:26.904 SGL Bit Bucket Descriptor: Not Supported 00:08:26.904 SGL Metadata Pointer: Not Supported 00:08:26.904 Oversized SGL: Not Supported 00:08:26.904 SGL Metadata Address: Not Supported 00:08:26.904 SGL Offset: Not Supported 00:08:26.904 Transport SGL Data Block: Not Supported 00:08:26.904 Replay Protected Memory Block: Not Supported 00:08:26.904 00:08:26.904 Firmware Slot Information 00:08:26.904 ========================= 00:08:26.904 Active slot: 1 00:08:26.904 Slot 1 Firmware Revision: 1.0 00:08:26.904 00:08:26.904 00:08:26.904 Commands Supported and Effects 00:08:26.904 ============================== 00:08:26.904 Admin Commands 00:08:26.904 -------------- 00:08:26.904 Delete I/O Submission Queue (00h): Supported 00:08:26.904 Create I/O Submission Queue (01h): Supported 00:08:26.904 Get Log Page (02h): Supported 00:08:26.904 Delete I/O Completion Queue (04h): Supported 00:08:26.904 Create I/O Completion Queue (05h): Supported 00:08:26.904 Identify (06h): Supported 00:08:26.904 Abort (08h): Supported 00:08:26.904 Set Features (09h): Supported 00:08:26.904 Get Features (0Ah): Supported 00:08:26.904 Asynchronous Event Request (0Ch): Supported 00:08:26.904 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:26.904 Directive Send (19h): Supported 00:08:26.904 Directive Receive (1Ah): Supported 00:08:26.904 Virtualization Management (1Ch): Supported 00:08:26.904 Doorbell Buffer Config (7Ch): Supported 00:08:26.904 Format NVM (80h): Supported LBA-Change 00:08:26.904 I/O Commands 00:08:26.904 ------------ 00:08:26.904 Flush (00h): Supported LBA-Change 00:08:26.904 Write (01h): Supported LBA-Change 00:08:26.904 Read (02h): Supported 00:08:26.904 Compare (05h): Supported 00:08:26.904 Write Zeroes (08h): Supported LBA-Change 00:08:26.904 
Dataset Management (09h): Supported LBA-Change 00:08:26.904 Unknown (0Ch): Supported 00:08:26.904 Unknown (12h): Supported 00:08:26.904 Copy (19h): Supported LBA-Change 00:08:26.904 Unknown (1Dh): Supported LBA-Change 00:08:26.904 00:08:26.904 Error Log 00:08:26.904 ========= 00:08:26.904 00:08:26.904 Arbitration 00:08:26.904 =========== 00:08:26.904 Arbitration Burst: no limit 00:08:26.904 00:08:26.904 Power Management 00:08:26.904 ================ 00:08:26.904 Number of Power States: 1 00:08:26.904 Current Power State: Power State #0 00:08:26.904 Power State #0: 00:08:26.904 Max Power: 25.00 W 00:08:26.904 Non-Operational State: Operational 00:08:26.904 Entry Latency: 16 microseconds 00:08:26.904 Exit Latency: 4 microseconds 00:08:26.904 Relative Read Throughput: 0 00:08:26.904 Relative Read Latency: 0 00:08:26.904 Relative Write Throughput: 0 00:08:26.904 Relative Write Latency: 0 00:08:26.904 Idle Power: Not Reported 00:08:26.904 Active Power: Not Reported 00:08:26.904 Non-Operational Permissive Mode: Not Supported 00:08:26.904 00:08:26.904 Health Information 00:08:26.904 ================== 00:08:26.904 Critical Warnings: 00:08:26.904 Available Spare Space: OK 00:08:26.904 Temperature: OK 00:08:26.904 Device Reliability: OK 00:08:26.904 Read Only: No 00:08:26.904 Volatile Memory Backup: OK 00:08:26.904 Current Temperature: 323 Kelvin (50 Celsius) 00:08:26.904 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:26.904 Available Spare: 0% 00:08:26.904 Available Spare Threshold: 0% 00:08:26.904 Life Percentage Used: 0% 00:08:26.904 Data Units Read: 991 00:08:26.904 Data Units Written: 858 00:08:26.904 Host Read Commands: 53751 00:08:26.904 Host Write Commands: 52552 00:08:26.904 Controller Busy Time: 0 minutes 00:08:26.904 Power Cycles: 0 00:08:26.904 Power On Hours: 0 hours 00:08:26.904 Unsafe Shutdowns: 0 00:08:26.904 Unrecoverable Media Errors: 0 00:08:26.904 Lifetime Error Log Entries: 0 00:08:26.904 Warning Temperature Time: 0 minutes 00:08:26.904 Critical Temperature Time: 0 minutes 00:08:26.904 00:08:26.904 Number of Queues 00:08:26.904 ================ 00:08:26.904 Number of I/O Submission Queues: 64 00:08:26.904 Number of I/O Completion Queues: 64 00:08:26.904 00:08:26.904 ZNS Specific Controller Data 00:08:26.904 ============================ 00:08:26.904 Zone Append Size Limit: 0 00:08:26.905 00:08:26.905 00:08:26.905 Active Namespaces 00:08:26.905 ================= 00:08:26.905 Namespace ID:1 00:08:26.905 Error Recovery Timeout: Unlimited 00:08:26.905 Command Set Identifier: NVM (00h) 00:08:26.905 Deallocate: Supported 00:08:26.905 Deallocated/Unwritten Error: Supported 00:08:26.905 Deallocated Read Value: All 0x00 00:08:26.905 Deallocate in Write Zeroes: Not Supported 00:08:26.905 Deallocated Guard Field: 0xFFFF 00:08:26.905 Flush: Supported 00:08:26.905 Reservation: Not Supported 00:08:26.905 Namespace Sharing Capabilities: Private 00:08:26.905 Size (in LBAs): 1310720 (5GiB) 00:08:26.905 Capacity (in LBAs): 1310720 (5GiB) 00:08:26.905 Utilization (in LBAs): 1310720 (5GiB) 00:08:26.905 Thin Provisioning: Not Supported 00:08:26.905 Per-NS Atomic Units: No 00:08:26.905 Maximum Single Source Range Length: 128 00:08:26.905 Maximum Copy Length: 128 00:08:26.905 Maximum Source Range Count: 128 00:08:26.905 NGUID/EUI64 Never Reused: No 00:08:26.905 Namespace Write Protected: No 00:08:26.905 Number of LBA Formats: 8 00:08:26.905 Current LBA Format: LBA Format #04 00:08:26.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:26.905 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:26.905 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:26.905 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:26.905 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:26.905 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:26.905 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:26.905 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:26.905 00:08:26.905 NVM Specific Namespace Data 00:08:26.905 =========================== 00:08:26.905 Logical Block Storage Tag Mask: 0 00:08:26.905 Protection Information Capabilities: 00:08:26.905 16b Guard Protection Information Storage Tag Support: No 00:08:26.905 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:26.905 Storage Tag Check Read Support: No 00:08:26.905 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:26.905 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:26.905 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:27.167 ===================================================== 00:08:27.167 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.167 ===================================================== 00:08:27.167 Controller Capabilities/Features 00:08:27.167 ================================ 00:08:27.167 Vendor ID: 1b36 00:08:27.167 Subsystem Vendor ID: 1af4 00:08:27.167 Serial Number: 12342 00:08:27.167 Model Number: QEMU NVMe Ctrl 00:08:27.167 Firmware Version: 8.0.0 00:08:27.167 Recommended Arb Burst: 6 00:08:27.167 IEEE OUI Identifier: 00 54 52 00:08:27.167 Multi-path I/O 00:08:27.167 May have multiple subsystem ports: No 00:08:27.167 May have multiple controllers: No 00:08:27.167 Associated with SR-IOV VF: No 00:08:27.167 Max Data Transfer Size: 524288 00:08:27.167 Max Number of Namespaces: 256 00:08:27.167 Max Number of I/O Queues: 64 00:08:27.167 NVMe Specification Version (VS): 1.4 00:08:27.167 NVMe Specification Version (Identify): 1.4 00:08:27.167 Maximum Queue Entries: 2048 00:08:27.167 Contiguous Queues Required: Yes 00:08:27.167 Arbitration Mechanisms Supported 00:08:27.167 Weighted Round Robin: Not Supported 00:08:27.167 Vendor Specific: Not Supported 00:08:27.167 Reset Timeout: 7500 ms 00:08:27.167 Doorbell Stride: 4 bytes 00:08:27.167 NVM Subsystem Reset: Not Supported 00:08:27.167 Command Sets Supported 00:08:27.167 NVM Command Set: Supported 00:08:27.167 Boot Partition: Not Supported 00:08:27.167 Memory Page Size Minimum: 4096 bytes 00:08:27.167 Memory Page Size Maximum: 65536 bytes 00:08:27.167 Persistent Memory Region: Not Supported 00:08:27.167 Optional Asynchronous Events Supported 00:08:27.167 Namespace Attribute Notices: Supported 00:08:27.167 Firmware 
Activation Notices: Not Supported 00:08:27.167 ANA Change Notices: Not Supported 00:08:27.167 PLE Aggregate Log Change Notices: Not Supported 00:08:27.167 LBA Status Info Alert Notices: Not Supported 00:08:27.168 EGE Aggregate Log Change Notices: Not Supported 00:08:27.168 Normal NVM Subsystem Shutdown event: Not Supported 00:08:27.168 Zone Descriptor Change Notices: Not Supported 00:08:27.168 Discovery Log Change Notices: Not Supported 00:08:27.168 Controller Attributes 00:08:27.168 128-bit Host Identifier: Not Supported 00:08:27.168 Non-Operational Permissive Mode: Not Supported 00:08:27.168 NVM Sets: Not Supported 00:08:27.168 Read Recovery Levels: Not Supported 00:08:27.168 Endurance Groups: Not Supported 00:08:27.168 Predictable Latency Mode: Not Supported 00:08:27.168 Traffic Based Keep ALive: Not Supported 00:08:27.168 Namespace Granularity: Not Supported 00:08:27.168 SQ Associations: Not Supported 00:08:27.168 UUID List: Not Supported 00:08:27.168 Multi-Domain Subsystem: Not Supported 00:08:27.168 Fixed Capacity Management: Not Supported 00:08:27.168 Variable Capacity Management: Not Supported 00:08:27.168 Delete Endurance Group: Not Supported 00:08:27.168 Delete NVM Set: Not Supported 00:08:27.168 Extended LBA Formats Supported: Supported 00:08:27.168 Flexible Data Placement Supported: Not Supported 00:08:27.168 00:08:27.168 Controller Memory Buffer Support 00:08:27.168 ================================ 00:08:27.168 Supported: No 00:08:27.168 00:08:27.168 Persistent Memory Region Support 00:08:27.168 ================================ 00:08:27.168 Supported: No 00:08:27.168 00:08:27.168 Admin Command Set Attributes 00:08:27.168 ============================ 00:08:27.168 Security Send/Receive: Not Supported 00:08:27.168 Format NVM: Supported 00:08:27.168 Firmware Activate/Download: Not Supported 00:08:27.168 Namespace Management: Supported 00:08:27.168 Device Self-Test: Not Supported 00:08:27.168 Directives: Supported 00:08:27.168 NVMe-MI: Not Supported 00:08:27.168 Virtualization Management: Not Supported 00:08:27.168 Doorbell Buffer Config: Supported 00:08:27.168 Get LBA Status Capability: Not Supported 00:08:27.168 Command & Feature Lockdown Capability: Not Supported 00:08:27.168 Abort Command Limit: 4 00:08:27.168 Async Event Request Limit: 4 00:08:27.168 Number of Firmware Slots: N/A 00:08:27.168 Firmware Slot 1 Read-Only: N/A 00:08:27.168 Firmware Activation Without Reset: N/A 00:08:27.168 Multiple Update Detection Support: N/A 00:08:27.168 Firmware Update Granularity: No Information Provided 00:08:27.168 Per-Namespace SMART Log: Yes 00:08:27.168 Asymmetric Namespace Access Log Page: Not Supported 00:08:27.168 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:27.168 Command Effects Log Page: Supported 00:08:27.168 Get Log Page Extended Data: Supported 00:08:27.168 Telemetry Log Pages: Not Supported 00:08:27.168 Persistent Event Log Pages: Not Supported 00:08:27.168 Supported Log Pages Log Page: May Support 00:08:27.168 Commands Supported & Effects Log Page: Not Supported 00:08:27.168 Feature Identifiers & Effects Log Page:May Support 00:08:27.168 NVMe-MI Commands & Effects Log Page: May Support 00:08:27.168 Data Area 4 for Telemetry Log: Not Supported 00:08:27.168 Error Log Page Entries Supported: 1 00:08:27.168 Keep Alive: Not Supported 00:08:27.168 00:08:27.168 NVM Command Set Attributes 00:08:27.168 ========================== 00:08:27.168 Submission Queue Entry Size 00:08:27.168 Max: 64 00:08:27.168 Min: 64 00:08:27.168 Completion Queue Entry Size 00:08:27.168 Max: 16 
00:08:27.168 Min: 16 00:08:27.168 Number of Namespaces: 256 00:08:27.168 Compare Command: Supported 00:08:27.168 Write Uncorrectable Command: Not Supported 00:08:27.168 Dataset Management Command: Supported 00:08:27.168 Write Zeroes Command: Supported 00:08:27.168 Set Features Save Field: Supported 00:08:27.168 Reservations: Not Supported 00:08:27.168 Timestamp: Supported 00:08:27.168 Copy: Supported 00:08:27.168 Volatile Write Cache: Present 00:08:27.168 Atomic Write Unit (Normal): 1 00:08:27.168 Atomic Write Unit (PFail): 1 00:08:27.168 Atomic Compare & Write Unit: 1 00:08:27.168 Fused Compare & Write: Not Supported 00:08:27.168 Scatter-Gather List 00:08:27.168 SGL Command Set: Supported 00:08:27.168 SGL Keyed: Not Supported 00:08:27.168 SGL Bit Bucket Descriptor: Not Supported 00:08:27.168 SGL Metadata Pointer: Not Supported 00:08:27.168 Oversized SGL: Not Supported 00:08:27.168 SGL Metadata Address: Not Supported 00:08:27.168 SGL Offset: Not Supported 00:08:27.168 Transport SGL Data Block: Not Supported 00:08:27.168 Replay Protected Memory Block: Not Supported 00:08:27.168 00:08:27.168 Firmware Slot Information 00:08:27.168 ========================= 00:08:27.168 Active slot: 1 00:08:27.168 Slot 1 Firmware Revision: 1.0 00:08:27.168 00:08:27.168 00:08:27.168 Commands Supported and Effects 00:08:27.168 ============================== 00:08:27.168 Admin Commands 00:08:27.168 -------------- 00:08:27.168 Delete I/O Submission Queue (00h): Supported 00:08:27.168 Create I/O Submission Queue (01h): Supported 00:08:27.168 Get Log Page (02h): Supported 00:08:27.168 Delete I/O Completion Queue (04h): Supported 00:08:27.168 Create I/O Completion Queue (05h): Supported 00:08:27.168 Identify (06h): Supported 00:08:27.168 Abort (08h): Supported 00:08:27.168 Set Features (09h): Supported 00:08:27.168 Get Features (0Ah): Supported 00:08:27.168 Asynchronous Event Request (0Ch): Supported 00:08:27.168 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:27.168 Directive Send (19h): Supported 00:08:27.168 Directive Receive (1Ah): Supported 00:08:27.168 Virtualization Management (1Ch): Supported 00:08:27.168 Doorbell Buffer Config (7Ch): Supported 00:08:27.168 Format NVM (80h): Supported LBA-Change 00:08:27.168 I/O Commands 00:08:27.168 ------------ 00:08:27.168 Flush (00h): Supported LBA-Change 00:08:27.168 Write (01h): Supported LBA-Change 00:08:27.168 Read (02h): Supported 00:08:27.168 Compare (05h): Supported 00:08:27.168 Write Zeroes (08h): Supported LBA-Change 00:08:27.168 Dataset Management (09h): Supported LBA-Change 00:08:27.168 Unknown (0Ch): Supported 00:08:27.168 Unknown (12h): Supported 00:08:27.168 Copy (19h): Supported LBA-Change 00:08:27.168 Unknown (1Dh): Supported LBA-Change 00:08:27.168 00:08:27.168 Error Log 00:08:27.168 ========= 00:08:27.168 00:08:27.168 Arbitration 00:08:27.168 =========== 00:08:27.168 Arbitration Burst: no limit 00:08:27.168 00:08:27.168 Power Management 00:08:27.168 ================ 00:08:27.168 Number of Power States: 1 00:08:27.168 Current Power State: Power State #0 00:08:27.168 Power State #0: 00:08:27.168 Max Power: 25.00 W 00:08:27.168 Non-Operational State: Operational 00:08:27.168 Entry Latency: 16 microseconds 00:08:27.168 Exit Latency: 4 microseconds 00:08:27.168 Relative Read Throughput: 0 00:08:27.168 Relative Read Latency: 0 00:08:27.168 Relative Write Throughput: 0 00:08:27.168 Relative Write Latency: 0 00:08:27.168 Idle Power: Not Reported 00:08:27.168 Active Power: Not Reported 00:08:27.168 Non-Operational Permissive Mode: Not Supported 
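The Power Management section above is decoded from the power state descriptors in the Identify Controller data. A small C sketch of how the printed figures map onto the raw fields, assuming the spec encodings (Maximum Power in 0.01 W units when the MXPS scale bit is clear; entry/exit latencies in microseconds); the struct here is illustrative, not SPDK's own type:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative subset of an NVMe power state descriptor. */
    struct psd_fields {
        uint16_t mp;    /* Maximum Power: 0.01 W units when mxps == 0 */
        uint8_t  mxps;  /* power scale bit: 0 -> 0.01 W, 1 -> 0.0001 W */
        uint32_t enlat; /* entry latency, microseconds */
        uint32_t exlat; /* exit latency, microseconds */
    };

    int main(void) {
        /* Values matching Power State #0 above: 25.00 W, 16 us in, 4 us out. */
        struct psd_fields ps0 = { .mp = 2500, .mxps = 0, .enlat = 16, .exlat = 4 };
        printf("Max Power: %.2f W\n", ps0.mp * (ps0.mxps ? 0.0001 : 0.01));
        printf("Entry Latency: %u microseconds\n", (unsigned)ps0.enlat);
        printf("Exit Latency: %u microseconds\n", (unsigned)ps0.exlat);
        return 0;
    }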
00:08:27.168 00:08:27.168 Health Information 00:08:27.168 ================== 00:08:27.168 Critical Warnings: 00:08:27.168 Available Spare Space: OK 00:08:27.168 Temperature: OK 00:08:27.168 Device Reliability: OK 00:08:27.168 Read Only: No 00:08:27.168 Volatile Memory Backup: OK 00:08:27.168 Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.168 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:27.168 Available Spare: 0% 00:08:27.168 Available Spare Threshold: 0% 00:08:27.168 Life Percentage Used: 0% 00:08:27.168 Data Units Read: 2210 00:08:27.168 Data Units Written: 1997 00:08:27.168 Host Read Commands: 111466 00:08:27.168 Host Write Commands: 109735 00:08:27.168 Controller Busy Time: 0 minutes 00:08:27.168 Power Cycles: 0 00:08:27.168 Power On Hours: 0 hours 00:08:27.168 Unsafe Shutdowns: 0 00:08:27.168 Unrecoverable Media Errors: 0 00:08:27.168 Lifetime Error Log Entries: 0 00:08:27.168 Warning Temperature Time: 0 minutes 00:08:27.168 Critical Temperature Time: 0 minutes 00:08:27.168 00:08:27.168 Number of Queues 00:08:27.168 ================ 00:08:27.168 Number of I/O Submission Queues: 64 00:08:27.168 Number of I/O Completion Queues: 64 00:08:27.168 00:08:27.168 ZNS Specific Controller Data 00:08:27.168 ============================ 00:08:27.168 Zone Append Size Limit: 0 00:08:27.168 00:08:27.168 00:08:27.168 Active Namespaces 00:08:27.168 ================= 00:08:27.168 Namespace ID:1 00:08:27.168 Error Recovery Timeout: Unlimited 00:08:27.168 Command Set Identifier: NVM (00h) 00:08:27.168 Deallocate: Supported 00:08:27.168 Deallocated/Unwritten Error: Supported 00:08:27.168 Deallocated Read Value: All 0x00 00:08:27.168 Deallocate in Write Zeroes: Not Supported 00:08:27.168 Deallocated Guard Field: 0xFFFF 00:08:27.169 Flush: Supported 00:08:27.169 Reservation: Not Supported 00:08:27.169 Namespace Sharing Capabilities: Private 00:08:27.169 Size (in LBAs): 1048576 (4GiB) 00:08:27.169 Capacity (in LBAs): 1048576 (4GiB) 00:08:27.169 Utilization (in LBAs): 1048576 (4GiB) 00:08:27.169 Thin Provisioning: Not Supported 00:08:27.169 Per-NS Atomic Units: No 00:08:27.169 Maximum Single Source Range Length: 128 00:08:27.169 Maximum Copy Length: 128 00:08:27.169 Maximum Source Range Count: 128 00:08:27.169 NGUID/EUI64 Never Reused: No 00:08:27.169 Namespace Write Protected: No 00:08:27.169 Number of LBA Formats: 8 00:08:27.169 Current LBA Format: LBA Format #04 00:08:27.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:27.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:27.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:27.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:27.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:27.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:27.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:27.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:27.169 00:08:27.169 NVM Specific Namespace Data 00:08:27.169 =========================== 00:08:27.169 Logical Block Storage Tag Mask: 0 00:08:27.169 Protection Information Capabilities: 00:08:27.169 16b Guard Protection Information Storage Tag Support: No 00:08:27.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:27.169 Storage Tag Check Read Support: No 00:08:27.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Namespace ID:2 00:08:27.169 Error Recovery Timeout: Unlimited 00:08:27.169 Command Set Identifier: NVM (00h) 00:08:27.169 Deallocate: Supported 00:08:27.169 Deallocated/Unwritten Error: Supported 00:08:27.169 Deallocated Read Value: All 0x00 00:08:27.169 Deallocate in Write Zeroes: Not Supported 00:08:27.169 Deallocated Guard Field: 0xFFFF 00:08:27.169 Flush: Supported 00:08:27.169 Reservation: Not Supported 00:08:27.169 Namespace Sharing Capabilities: Private 00:08:27.169 Size (in LBAs): 1048576 (4GiB) 00:08:27.169 Capacity (in LBAs): 1048576 (4GiB) 00:08:27.169 Utilization (in LBAs): 1048576 (4GiB) 00:08:27.169 Thin Provisioning: Not Supported 00:08:27.169 Per-NS Atomic Units: No 00:08:27.169 Maximum Single Source Range Length: 128 00:08:27.169 Maximum Copy Length: 128 00:08:27.169 Maximum Source Range Count: 128 00:08:27.169 NGUID/EUI64 Never Reused: No 00:08:27.169 Namespace Write Protected: No 00:08:27.169 Number of LBA Formats: 8 00:08:27.169 Current LBA Format: LBA Format #04 00:08:27.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:27.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:27.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:27.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:27.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:27.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:27.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:27.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:27.169 00:08:27.169 NVM Specific Namespace Data 00:08:27.169 =========================== 00:08:27.169 Logical Block Storage Tag Mask: 0 00:08:27.169 Protection Information Capabilities: 00:08:27.169 16b Guard Protection Information Storage Tag Support: No 00:08:27.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:27.169 Storage Tag Check Read Support: No 00:08:27.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Namespace ID:3 00:08:27.169 Error Recovery Timeout: Unlimited 00:08:27.169 Command Set Identifier: NVM (00h) 00:08:27.169 Deallocate: Supported 00:08:27.169 Deallocated/Unwritten Error: Supported 00:08:27.169 Deallocated Read 
Value: All 0x00 00:08:27.169 Deallocate in Write Zeroes: Not Supported 00:08:27.169 Deallocated Guard Field: 0xFFFF 00:08:27.169 Flush: Supported 00:08:27.169 Reservation: Not Supported 00:08:27.169 Namespace Sharing Capabilities: Private 00:08:27.169 Size (in LBAs): 1048576 (4GiB) 00:08:27.169 Capacity (in LBAs): 1048576 (4GiB) 00:08:27.169 Utilization (in LBAs): 1048576 (4GiB) 00:08:27.169 Thin Provisioning: Not Supported 00:08:27.169 Per-NS Atomic Units: No 00:08:27.169 Maximum Single Source Range Length: 128 00:08:27.169 Maximum Copy Length: 128 00:08:27.169 Maximum Source Range Count: 128 00:08:27.169 NGUID/EUI64 Never Reused: No 00:08:27.169 Namespace Write Protected: No 00:08:27.169 Number of LBA Formats: 8 00:08:27.169 Current LBA Format: LBA Format #04 00:08:27.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:27.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:27.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:27.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:27.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:27.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:27.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:27.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:27.169 00:08:27.169 NVM Specific Namespace Data 00:08:27.169 =========================== 00:08:27.169 Logical Block Storage Tag Mask: 0 00:08:27.169 Protection Information Capabilities: 00:08:27.169 16b Guard Protection Information Storage Tag Support: No 00:08:27.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:27.169 Storage Tag Check Read Support: No 00:08:27.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.169 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:27.169 00:31:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:27.169 ===================================================== 00:08:27.169 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.169 ===================================================== 00:08:27.169 Controller Capabilities/Features 00:08:27.169 ================================ 00:08:27.169 Vendor ID: 1b36 00:08:27.169 Subsystem Vendor ID: 1af4 00:08:27.169 Serial Number: 12343 00:08:27.169 Model Number: QEMU NVMe Ctrl 00:08:27.169 Firmware Version: 8.0.0 00:08:27.169 Recommended Arb Burst: 6 00:08:27.169 IEEE OUI Identifier: 00 54 52 00:08:27.169 Multi-path I/O 00:08:27.169 May have multiple subsystem ports: No 00:08:27.169 May have multiple controllers: Yes 00:08:27.169 Associated with SR-IOV VF: No 00:08:27.169 Max Data Transfer Size: 524288 00:08:27.169 Max Number of Namespaces: 
256 00:08:27.169 Max Number of I/O Queues: 64 00:08:27.169 NVMe Specification Version (VS): 1.4 00:08:27.169 NVMe Specification Version (Identify): 1.4 00:08:27.169 Maximum Queue Entries: 2048 00:08:27.169 Contiguous Queues Required: Yes 00:08:27.169 Arbitration Mechanisms Supported 00:08:27.169 Weighted Round Robin: Not Supported 00:08:27.169 Vendor Specific: Not Supported 00:08:27.169 Reset Timeout: 7500 ms 00:08:27.169 Doorbell Stride: 4 bytes 00:08:27.169 NVM Subsystem Reset: Not Supported 00:08:27.169 Command Sets Supported 00:08:27.169 NVM Command Set: Supported 00:08:27.169 Boot Partition: Not Supported 00:08:27.169 Memory Page Size Minimum: 4096 bytes 00:08:27.169 Memory Page Size Maximum: 65536 bytes 00:08:27.169 Persistent Memory Region: Not Supported 00:08:27.169 Optional Asynchronous Events Supported 00:08:27.170 Namespace Attribute Notices: Supported 00:08:27.170 Firmware Activation Notices: Not Supported 00:08:27.170 ANA Change Notices: Not Supported 00:08:27.170 PLE Aggregate Log Change Notices: Not Supported 00:08:27.170 LBA Status Info Alert Notices: Not Supported 00:08:27.170 EGE Aggregate Log Change Notices: Not Supported 00:08:27.170 Normal NVM Subsystem Shutdown event: Not Supported 00:08:27.170 Zone Descriptor Change Notices: Not Supported 00:08:27.170 Discovery Log Change Notices: Not Supported 00:08:27.170 Controller Attributes 00:08:27.170 128-bit Host Identifier: Not Supported 00:08:27.170 Non-Operational Permissive Mode: Not Supported 00:08:27.170 NVM Sets: Not Supported 00:08:27.170 Read Recovery Levels: Not Supported 00:08:27.170 Endurance Groups: Supported 00:08:27.170 Predictable Latency Mode: Not Supported 00:08:27.170 Traffic Based Keep Alive: Not Supported 00:08:27.170 Namespace Granularity: Not Supported 00:08:27.170 SQ Associations: Not Supported 00:08:27.170 UUID List: Not Supported 00:08:27.170 Multi-Domain Subsystem: Not Supported 00:08:27.170 Fixed Capacity Management: Not Supported 00:08:27.170 Variable Capacity Management: Not Supported 00:08:27.170 Delete Endurance Group: Not Supported 00:08:27.170 Delete NVM Set: Not Supported 00:08:27.170 Extended LBA Formats Supported: Supported 00:08:27.170 Flexible Data Placement Supported: Supported 00:08:27.170 00:08:27.170 Controller Memory Buffer Support 00:08:27.170 ================================ 00:08:27.170 Supported: No 00:08:27.170 00:08:27.170 Persistent Memory Region Support 00:08:27.170 ================================ 00:08:27.170 Supported: No 00:08:27.170 00:08:27.170 Admin Command Set Attributes 00:08:27.170 ============================ 00:08:27.170 Security Send/Receive: Not Supported 00:08:27.170 Format NVM: Supported 00:08:27.170 Firmware Activate/Download: Not Supported 00:08:27.170 Namespace Management: Supported 00:08:27.170 Device Self-Test: Not Supported 00:08:27.170 Directives: Supported 00:08:27.170 NVMe-MI: Not Supported 00:08:27.170 Virtualization Management: Not Supported 00:08:27.170 Doorbell Buffer Config: Supported 00:08:27.170 Get LBA Status Capability: Not Supported 00:08:27.170 Command & Feature Lockdown Capability: Not Supported 00:08:27.170 Abort Command Limit: 4 00:08:27.170 Async Event Request Limit: 4 00:08:27.170 Number of Firmware Slots: N/A 00:08:27.170 Firmware Slot 1 Read-Only: N/A 00:08:27.170 Firmware Activation Without Reset: N/A 00:08:27.170 Multiple Update Detection Support: N/A 00:08:27.170 Firmware Update Granularity: No Information Provided 00:08:27.170 Per-Namespace SMART Log: Yes 00:08:27.170 Asymmetric Namespace Access Log Page: Not Supported
00:08:27.170 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:27.170 Command Effects Log Page: Supported 00:08:27.170 Get Log Page Extended Data: Supported 00:08:27.170 Telemetry Log Pages: Not Supported 00:08:27.170 Persistent Event Log Pages: Not Supported 00:08:27.170 Supported Log Pages Log Page: May Support 00:08:27.170 Commands Supported & Effects Log Page: Not Supported 00:08:27.170 Feature Identifiers & Effects Log Page: May Support 00:08:27.170 NVMe-MI Commands & Effects Log Page: May Support 00:08:27.170 Data Area 4 for Telemetry Log: Not Supported 00:08:27.170 Error Log Page Entries Supported: 1 00:08:27.170 Keep Alive: Not Supported 00:08:27.170 00:08:27.170 NVM Command Set Attributes 00:08:27.170 ========================== 00:08:27.170 Submission Queue Entry Size 00:08:27.170 Max: 64 00:08:27.170 Min: 64 00:08:27.170 Completion Queue Entry Size 00:08:27.170 Max: 16 00:08:27.170 Min: 16 00:08:27.170 Number of Namespaces: 256 00:08:27.170 Compare Command: Supported 00:08:27.170 Write Uncorrectable Command: Not Supported 00:08:27.170 Dataset Management Command: Supported 00:08:27.170 Write Zeroes Command: Supported 00:08:27.170 Set Features Save Field: Supported 00:08:27.170 Reservations: Not Supported 00:08:27.170 Timestamp: Supported 00:08:27.170 Copy: Supported 00:08:27.170 Volatile Write Cache: Present 00:08:27.170 Atomic Write Unit (Normal): 1 00:08:27.170 Atomic Write Unit (PFail): 1 00:08:27.170 Atomic Compare & Write Unit: 1 00:08:27.170 Fused Compare & Write: Not Supported 00:08:27.170 Scatter-Gather List 00:08:27.170 SGL Command Set: Supported 00:08:27.170 SGL Keyed: Not Supported 00:08:27.170 SGL Bit Bucket Descriptor: Not Supported 00:08:27.170 SGL Metadata Pointer: Not Supported 00:08:27.170 Oversized SGL: Not Supported 00:08:27.170 SGL Metadata Address: Not Supported 00:08:27.170 SGL Offset: Not Supported 00:08:27.170 Transport SGL Data Block: Not Supported 00:08:27.170 Replay Protected Memory Block: Not Supported 00:08:27.170 00:08:27.170 Firmware Slot Information 00:08:27.170 ========================= 00:08:27.170 Active slot: 1 00:08:27.170 Slot 1 Firmware Revision: 1.0 00:08:27.170 00:08:27.170 00:08:27.170 Commands Supported and Effects 00:08:27.170 ============================== 00:08:27.170 Admin Commands 00:08:27.170 -------------- 00:08:27.170 Delete I/O Submission Queue (00h): Supported 00:08:27.170 Create I/O Submission Queue (01h): Supported 00:08:27.170 Get Log Page (02h): Supported 00:08:27.170 Delete I/O Completion Queue (04h): Supported 00:08:27.170 Create I/O Completion Queue (05h): Supported 00:08:27.170 Identify (06h): Supported 00:08:27.170 Abort (08h): Supported 00:08:27.170 Set Features (09h): Supported 00:08:27.170 Get Features (0Ah): Supported 00:08:27.170 Asynchronous Event Request (0Ch): Supported 00:08:27.170 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:27.170 Directive Send (19h): Supported 00:08:27.170 Directive Receive (1Ah): Supported 00:08:27.170 Virtualization Management (1Ch): Supported 00:08:27.170 Doorbell Buffer Config (7Ch): Supported 00:08:27.170 Format NVM (80h): Supported LBA-Change 00:08:27.170 I/O Commands 00:08:27.170 ------------ 00:08:27.170 Flush (00h): Supported LBA-Change 00:08:27.170 Write (01h): Supported LBA-Change 00:08:27.170 Read (02h): Supported 00:08:27.170 Compare (05h): Supported 00:08:27.170 Write Zeroes (08h): Supported LBA-Change 00:08:27.170 Dataset Management (09h): Supported LBA-Change 00:08:27.170 Unknown (0Ch): Supported 00:08:27.170 Unknown (12h): Supported 00:08:27.170 Copy
(19h): Supported LBA-Change 00:08:27.170 Unknown (1Dh): Supported LBA-Change 00:08:27.170 00:08:27.170 Error Log 00:08:27.170 ========= 00:08:27.170 00:08:27.170 Arbitration 00:08:27.170 =========== 00:08:27.170 Arbitration Burst: no limit 00:08:27.170 00:08:27.170 Power Management 00:08:27.170 ================ 00:08:27.170 Number of Power States: 1 00:08:27.170 Current Power State: Power State #0 00:08:27.170 Power State #0: 00:08:27.170 Max Power: 25.00 W 00:08:27.170 Non-Operational State: Operational 00:08:27.170 Entry Latency: 16 microseconds 00:08:27.170 Exit Latency: 4 microseconds 00:08:27.170 Relative Read Throughput: 0 00:08:27.170 Relative Read Latency: 0 00:08:27.170 Relative Write Throughput: 0 00:08:27.170 Relative Write Latency: 0 00:08:27.170 Idle Power: Not Reported 00:08:27.170 Active Power: Not Reported 00:08:27.170 Non-Operational Permissive Mode: Not Supported 00:08:27.170 00:08:27.170 Health Information 00:08:27.170 ================== 00:08:27.170 Critical Warnings: 00:08:27.170 Available Spare Space: OK 00:08:27.170 Temperature: OK 00:08:27.170 Device Reliability: OK 00:08:27.170 Read Only: No 00:08:27.170 Volatile Memory Backup: OK 00:08:27.170 Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.170 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:27.170 Available Spare: 0% 00:08:27.170 Available Spare Threshold: 0% 00:08:27.170 Life Percentage Used: 0% 00:08:27.170 Data Units Read: 854 00:08:27.170 Data Units Written: 783 00:08:27.170 Host Read Commands: 38129 00:08:27.170 Host Write Commands: 37552 00:08:27.170 Controller Busy Time: 0 minutes 00:08:27.170 Power Cycles: 0 00:08:27.170 Power On Hours: 0 hours 00:08:27.170 Unsafe Shutdowns: 0 00:08:27.170 Unrecoverable Media Errors: 0 00:08:27.170 Lifetime Error Log Entries: 0 00:08:27.170 Warning Temperature Time: 0 minutes 00:08:27.170 Critical Temperature Time: 0 minutes 00:08:27.170 00:08:27.170 Number of Queues 00:08:27.170 ================ 00:08:27.170 Number of I/O Submission Queues: 64 00:08:27.170 Number of I/O Completion Queues: 64 00:08:27.170 00:08:27.170 ZNS Specific Controller Data 00:08:27.170 ============================ 00:08:27.170 Zone Append Size Limit: 0 00:08:27.170 00:08:27.170 00:08:27.170 Active Namespaces 00:08:27.170 ================= 00:08:27.171 Namespace ID:1 00:08:27.171 Error Recovery Timeout: Unlimited 00:08:27.171 Command Set Identifier: NVM (00h) 00:08:27.171 Deallocate: Supported 00:08:27.171 Deallocated/Unwritten Error: Supported 00:08:27.171 Deallocated Read Value: All 0x00 00:08:27.171 Deallocate in Write Zeroes: Not Supported 00:08:27.171 Deallocated Guard Field: 0xFFFF 00:08:27.171 Flush: Supported 00:08:27.171 Reservation: Not Supported 00:08:27.171 Namespace Sharing Capabilities: Multiple Controllers 00:08:27.171 Size (in LBAs): 262144 (1GiB) 00:08:27.171 Capacity (in LBAs): 262144 (1GiB) 00:08:27.171 Utilization (in LBAs): 262144 (1GiB) 00:08:27.171 Thin Provisioning: Not Supported 00:08:27.171 Per-NS Atomic Units: No 00:08:27.171 Maximum Single Source Range Length: 128 00:08:27.171 Maximum Copy Length: 128 00:08:27.171 Maximum Source Range Count: 128 00:08:27.171 NGUID/EUI64 Never Reused: No 00:08:27.171 Namespace Write Protected: No 00:08:27.171 Endurance group ID: 1 00:08:27.171 Number of LBA Formats: 8 00:08:27.171 Current LBA Format: LBA Format #04 00:08:27.171 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:27.171 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:27.171 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:27.171 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:27.171 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:27.171 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:27.171 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:27.171 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:27.171 00:08:27.171 Get Feature FDP: 00:08:27.171 ================ 00:08:27.171 Enabled: Yes 00:08:27.171 FDP configuration index: 0 00:08:27.171 00:08:27.171 FDP configurations log page 00:08:27.171 =========================== 00:08:27.171 Number of FDP configurations: 1 00:08:27.171 Version: 0 00:08:27.171 Size: 112 00:08:27.171 FDP Configuration Descriptor: 0 00:08:27.171 Descriptor Size: 96 00:08:27.171 Reclaim Group Identifier format: 2 00:08:27.171 FDP Volatile Write Cache: Not Present 00:08:27.171 FDP Configuration: Valid 00:08:27.171 Vendor Specific Size: 0 00:08:27.171 Number of Reclaim Groups: 2 00:08:27.171 Number of Reclaim Unit Handles: 8 00:08:27.171 Max Placement Identifiers: 128 00:08:27.171 Number of Namespaces Supported: 256 00:08:27.171 Reclaim unit Nominal Size: 6000000 bytes 00:08:27.171 Estimated Reclaim Unit Time Limit: Not Reported 00:08:27.171 RUH Desc #000: RUH Type: Initially Isolated 00:08:27.171 RUH Desc #001: RUH Type: Initially Isolated 00:08:27.171 RUH Desc #002: RUH Type: Initially Isolated 00:08:27.171 RUH Desc #003: RUH Type: Initially Isolated 00:08:27.171 RUH Desc #004: RUH Type: Initially Isolated 00:08:27.171 RUH Desc #005: RUH Type: Initially Isolated 00:08:27.171 RUH Desc #006: RUH Type: Initially Isolated 00:08:27.171 RUH Desc #007: RUH Type: Initially Isolated 00:08:27.171 00:08:27.171 FDP reclaim unit handle usage log page 00:08:27.432 ====================================== 00:08:27.432 Number of Reclaim Unit Handles: 8 00:08:27.432 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:27.432 RUH Usage Desc #001: RUH Attributes: Unused 00:08:27.432 RUH Usage Desc #002: RUH Attributes: Unused 00:08:27.432 RUH Usage Desc #003: RUH Attributes: Unused 00:08:27.433 RUH Usage Desc #004: RUH Attributes: Unused 00:08:27.433 RUH Usage Desc #005: RUH Attributes: Unused 00:08:27.433 RUH Usage Desc #006: RUH Attributes: Unused 00:08:27.433 RUH Usage Desc #007: RUH Attributes: Unused 00:08:27.433 00:08:27.433 FDP statistics log page 00:08:27.433 ======================= 00:08:27.433 Host bytes with metadata written: 497393664 00:08:27.433 Media bytes with metadata written: 497446912 00:08:27.433 Media bytes erased: 0 00:08:27.433 00:08:27.433 FDP events log page 00:08:27.433 =================== 00:08:27.433 Number of FDP events: 0 00:08:27.433 00:08:27.433 NVM Specific Namespace Data 00:08:27.433 =========================== 00:08:27.433 Logical Block Storage Tag Mask: 0 00:08:27.433 Protection Information Capabilities: 00:08:27.433 16b Guard Protection Information Storage Tag Support: No 00:08:27.433 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:27.433 Storage Tag Check Read Support: No 00:08:27.433 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:27.433 00:08:27.433 real 0m1.185s 00:08:27.433 user 0m0.426s 00:08:27.433 sys 0m0.527s 00:08:27.433 00:31:03 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.433 ************************************ 00:08:27.433 END TEST nvme_identify 00:08:27.433 ************************************ 00:08:27.433 00:31:03 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:27.433 00:31:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:27.433 00:31:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.433 00:31:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.433 00:31:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.433 ************************************ 00:08:27.433 START TEST nvme_perf 00:08:27.433 ************************************ 00:08:27.433 00:31:04 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:27.433 00:31:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:28.819 Initializing NVMe Controllers 00:08:28.819 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:28.819 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:28.819 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:28.819 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:28.819 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:28.819 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:28.819 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:28.819 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:28.819 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:28.819 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:28.819 Initialization complete. Launching workers. 
00:08:28.819 ======================================================== 00:08:28.819 Latency(us) 00:08:28.819 Device Information : IOPS MiB/s Average min max 00:08:28.819 PCIE (0000:00:13.0) NSID 1 from core 0: 7933.91 92.98 16145.86 12009.48 42129.27 00:08:28.819 PCIE (0000:00:10.0) NSID 1 from core 0: 7933.91 92.98 16128.50 11168.61 42712.42 00:08:28.819 PCIE (0000:00:11.0) NSID 1 from core 0: 7933.91 92.98 16106.80 10668.02 42225.05 00:08:28.819 PCIE (0000:00:12.0) NSID 1 from core 0: 7933.91 92.98 16083.09 7969.54 43450.91 00:08:28.819 PCIE (0000:00:12.0) NSID 2 from core 0: 7933.91 92.98 16058.71 6479.75 42771.87 00:08:28.819 PCIE (0000:00:12.0) NSID 3 from core 0: 7933.91 92.98 16034.75 5315.58 42719.32 00:08:28.819 ======================================================== 00:08:28.819 Total : 47603.48 557.85 16092.95 5315.58 43450.91 00:08:28.819 00:08:28.819 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:28.819 ================================================================================= 00:08:28.819 1.00000% : 13308.849us 00:08:28.819 10.00000% : 14317.095us 00:08:28.819 25.00000% : 14821.218us 00:08:28.820 50.00000% : 15426.166us 00:08:28.820 75.00000% : 16434.412us 00:08:28.820 90.00000% : 18450.905us 00:08:28.820 95.00000% : 21072.345us 00:08:28.820 98.00000% : 24802.855us 00:08:28.820 99.00000% : 28432.542us 00:08:28.820 99.50000% : 41136.443us 00:08:28.820 99.90000% : 41943.040us 00:08:28.820 99.99000% : 42144.689us 00:08:28.820 99.99900% : 42144.689us 00:08:28.820 99.99990% : 42144.689us 00:08:28.820 99.99999% : 42144.689us 00:08:28.820 00:08:28.820 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:28.820 ================================================================================= 00:08:28.820 1.00000% : 13208.025us 00:08:28.820 10.00000% : 14317.095us 00:08:28.820 25.00000% : 14821.218us 00:08:28.820 50.00000% : 15426.166us 00:08:28.820 75.00000% : 16333.588us 00:08:28.820 90.00000% : 18148.431us 00:08:28.820 95.00000% : 20870.695us 00:08:28.820 98.00000% : 23895.434us 00:08:28.820 99.00000% : 31255.631us 00:08:28.820 99.50000% : 41539.742us 00:08:28.820 99.90000% : 42547.988us 00:08:28.820 99.99000% : 42749.637us 00:08:28.820 99.99900% : 42749.637us 00:08:28.820 99.99990% : 42749.637us 00:08:28.820 99.99999% : 42749.637us 00:08:28.820 00:08:28.820 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:28.820 ================================================================================= 00:08:28.820 1.00000% : 12754.314us 00:08:28.820 10.00000% : 14317.095us 00:08:28.820 25.00000% : 14821.218us 00:08:28.820 50.00000% : 15426.166us 00:08:28.820 75.00000% : 16434.412us 00:08:28.820 90.00000% : 18249.255us 00:08:28.820 95.00000% : 20669.046us 00:08:28.820 98.00000% : 23693.785us 00:08:28.820 99.00000% : 31053.982us 00:08:28.820 99.50000% : 41338.092us 00:08:28.820 99.90000% : 42144.689us 00:08:28.820 99.99000% : 42346.338us 00:08:28.820 99.99900% : 42346.338us 00:08:28.820 99.99990% : 42346.338us 00:08:28.820 99.99999% : 42346.338us 00:08:28.820 00:08:28.820 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:28.820 ================================================================================= 00:08:28.820 1.00000% : 12905.551us 00:08:28.820 10.00000% : 14317.095us 00:08:28.820 25.00000% : 14821.218us 00:08:28.820 50.00000% : 15426.166us 00:08:28.820 75.00000% : 16434.412us 00:08:28.820 90.00000% : 18551.729us 00:08:28.820 95.00000% : 20366.572us 00:08:28.820 98.00000% : 23290.486us 
00:08:28.820 99.00000% : 31255.631us 00:08:28.820 99.50000% : 42547.988us 00:08:28.820 99.90000% : 43354.585us 00:08:28.820 99.99000% : 43556.234us 00:08:28.820 99.99900% : 43556.234us 00:08:28.820 99.99990% : 43556.234us 00:08:28.820 99.99999% : 43556.234us 00:08:28.820 00:08:28.820 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:28.820 ================================================================================= 00:08:28.820 1.00000% : 13107.200us 00:08:28.820 10.00000% : 14216.271us 00:08:28.820 25.00000% : 14821.218us 00:08:28.820 50.00000% : 15426.166us 00:08:28.820 75.00000% : 16434.412us 00:08:28.820 90.00000% : 18047.606us 00:08:28.820 95.00000% : 20366.572us 00:08:28.820 98.00000% : 23492.135us 00:08:28.820 99.00000% : 30449.034us 00:08:28.820 99.50000% : 41943.040us 00:08:28.820 99.90000% : 42749.637us 00:08:28.820 99.99000% : 42951.286us 00:08:28.820 99.99900% : 42951.286us 00:08:28.820 99.99990% : 42951.286us 00:08:28.820 99.99999% : 42951.286us 00:08:28.820 00:08:28.820 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:28.820 ================================================================================= 00:08:28.820 1.00000% : 13006.375us 00:08:28.820 10.00000% : 14216.271us 00:08:28.820 25.00000% : 14720.394us 00:08:28.820 50.00000% : 15426.166us 00:08:28.820 75.00000% : 16535.237us 00:08:28.820 90.00000% : 17946.782us 00:08:28.820 95.00000% : 20164.923us 00:08:28.820 98.00000% : 23492.135us 00:08:28.820 99.00000% : 29844.086us 00:08:28.820 99.50000% : 41943.040us 00:08:28.820 99.90000% : 42749.637us 00:08:28.820 99.99000% : 42749.637us 00:08:28.820 99.99900% : 42749.637us 00:08:28.820 99.99990% : 42749.637us 00:08:28.820 99.99999% : 42749.637us 00:08:28.820 00:08:28.820 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:28.820 ============================================================================== 00:08:28.820 Range in us Cumulative IO count 00:08:28.820 11998.129 - 12048.542: 0.1260% ( 10) 00:08:28.820 12048.542 - 12098.954: 0.1638% ( 3) 00:08:28.820 12098.954 - 12149.366: 0.2016% ( 3) 00:08:28.820 12149.366 - 12199.778: 0.2394% ( 3) 00:08:28.820 12199.778 - 12250.191: 0.2772% ( 3) 00:08:28.820 12250.191 - 12300.603: 0.3150% ( 3) 00:08:28.820 12300.603 - 12351.015: 0.3654% ( 4) 00:08:28.820 12351.015 - 12401.428: 0.4032% ( 3) 00:08:28.820 12401.428 - 12451.840: 0.4536% ( 4) 00:08:28.820 12451.840 - 12502.252: 0.4914% ( 3) 00:08:28.820 12502.252 - 12552.665: 0.5292% ( 3) 00:08:28.820 12552.665 - 12603.077: 0.5544% ( 2) 00:08:28.820 12603.077 - 12653.489: 0.6048% ( 4) 00:08:28.820 12653.489 - 12703.902: 0.6426% ( 3) 00:08:28.820 12703.902 - 12754.314: 0.6930% ( 4) 00:08:28.820 12754.314 - 12804.726: 0.7434% ( 4) 00:08:28.820 12804.726 - 12855.138: 0.7812% ( 3) 00:08:28.820 12855.138 - 12905.551: 0.8065% ( 2) 00:08:28.820 12905.551 - 13006.375: 0.8191% ( 1) 00:08:28.820 13006.375 - 13107.200: 0.8821% ( 5) 00:08:28.820 13107.200 - 13208.025: 0.9325% ( 4) 00:08:28.820 13208.025 - 13308.849: 1.0711% ( 11) 00:08:28.820 13308.849 - 13409.674: 1.2853% ( 17) 00:08:28.820 13409.674 - 13510.498: 1.6255% ( 27) 00:08:28.820 13510.498 - 13611.323: 2.0161% ( 31) 00:08:28.820 13611.323 - 13712.148: 2.6336% ( 49) 00:08:28.820 13712.148 - 13812.972: 3.6542% ( 81) 00:08:28.820 13812.972 - 13913.797: 4.5741% ( 73) 00:08:28.820 13913.797 - 14014.622: 5.9728% ( 111) 00:08:28.820 14014.622 - 14115.446: 7.7243% ( 139) 00:08:28.820 14115.446 - 14216.271: 9.7530% ( 161) 00:08:28.820 14216.271 - 14317.095: 12.0842% ( 185) 
00:08:28.820 14317.095 - 14417.920: 14.8690% ( 221) 00:08:28.820 14417.920 - 14518.745: 18.1074% ( 257) 00:08:28.820 14518.745 - 14619.569: 21.4214% ( 263) 00:08:28.820 14619.569 - 14720.394: 24.8488% ( 272) 00:08:28.820 14720.394 - 14821.218: 28.4652% ( 287) 00:08:28.820 14821.218 - 14922.043: 32.1825% ( 295) 00:08:28.820 14922.043 - 15022.868: 36.0257% ( 305) 00:08:28.820 15022.868 - 15123.692: 39.8059% ( 300) 00:08:28.820 15123.692 - 15224.517: 43.3342% ( 280) 00:08:28.820 15224.517 - 15325.342: 47.0136% ( 292) 00:08:28.820 15325.342 - 15426.166: 50.6930% ( 292) 00:08:28.820 15426.166 - 15526.991: 54.1709% ( 276) 00:08:28.820 15526.991 - 15627.815: 57.3715% ( 254) 00:08:28.820 15627.815 - 15728.640: 60.1562% ( 221) 00:08:28.820 15728.640 - 15829.465: 62.8402% ( 213) 00:08:28.820 15829.465 - 15930.289: 65.2470% ( 191) 00:08:28.820 15930.289 - 16031.114: 67.6285% ( 189) 00:08:28.820 16031.114 - 16131.938: 69.9723% ( 186) 00:08:28.820 16131.938 - 16232.763: 72.1774% ( 175) 00:08:28.820 16232.763 - 16333.588: 74.0045% ( 145) 00:08:28.820 16333.588 - 16434.412: 75.5544% ( 123) 00:08:28.820 16434.412 - 16535.237: 76.8649% ( 104) 00:08:28.820 16535.237 - 16636.062: 78.0494% ( 94) 00:08:28.820 16636.062 - 16736.886: 79.2717% ( 97) 00:08:28.820 16736.886 - 16837.711: 80.4435% ( 93) 00:08:28.820 16837.711 - 16938.535: 81.4138% ( 77) 00:08:28.820 16938.535 - 17039.360: 82.2833% ( 69) 00:08:28.820 17039.360 - 17140.185: 83.2787% ( 79) 00:08:28.820 17140.185 - 17241.009: 84.0096% ( 58) 00:08:28.820 17241.009 - 17341.834: 84.7530% ( 59) 00:08:28.820 17341.834 - 17442.658: 85.4839% ( 58) 00:08:28.820 17442.658 - 17543.483: 86.2021% ( 57) 00:08:28.820 17543.483 - 17644.308: 87.0086% ( 64) 00:08:28.820 17644.308 - 17745.132: 87.6638% ( 52) 00:08:28.820 17745.132 - 17845.957: 88.2182% ( 44) 00:08:28.820 17845.957 - 17946.782: 88.6593% ( 35) 00:08:28.820 17946.782 - 18047.606: 88.9617% ( 24) 00:08:28.820 18047.606 - 18148.431: 89.2137% ( 20) 00:08:28.820 18148.431 - 18249.255: 89.4783% ( 21) 00:08:28.820 18249.255 - 18350.080: 89.7933% ( 25) 00:08:28.820 18350.080 - 18450.905: 90.1840% ( 31) 00:08:28.820 18450.905 - 18551.729: 90.5998% ( 33) 00:08:28.820 18551.729 - 18652.554: 91.0660% ( 37) 00:08:28.820 18652.554 - 18753.378: 91.4315% ( 29) 00:08:28.820 18753.378 - 18854.203: 91.7843% ( 28) 00:08:28.820 18854.203 - 18955.028: 92.0993% ( 25) 00:08:28.820 18955.028 - 19055.852: 92.4143% ( 25) 00:08:28.820 19055.852 - 19156.677: 92.7293% ( 25) 00:08:28.820 19156.677 - 19257.502: 93.0066% ( 22) 00:08:28.820 19257.502 - 19358.326: 93.2964% ( 23) 00:08:28.820 19358.326 - 19459.151: 93.5988% ( 24) 00:08:28.820 19459.151 - 19559.975: 93.8760% ( 22) 00:08:28.820 19559.975 - 19660.800: 94.0902% ( 17) 00:08:28.820 19660.800 - 19761.625: 94.2162% ( 10) 00:08:28.820 19761.625 - 19862.449: 94.3296% ( 9) 00:08:28.820 19862.449 - 19963.274: 94.3674% ( 3) 00:08:28.820 19963.274 - 20064.098: 94.4178% ( 4) 00:08:28.820 20064.098 - 20164.923: 94.4808% ( 5) 00:08:28.820 20164.923 - 20265.748: 94.5312% ( 4) 00:08:28.820 20265.748 - 20366.572: 94.5943% ( 5) 00:08:28.820 20366.572 - 20467.397: 94.6447% ( 4) 00:08:28.820 20467.397 - 20568.222: 94.7077% ( 5) 00:08:28.820 20568.222 - 20669.046: 94.7581% ( 4) 00:08:28.820 20669.046 - 20769.871: 94.8337% ( 6) 00:08:28.820 20769.871 - 20870.695: 94.8967% ( 5) 00:08:28.821 20870.695 - 20971.520: 94.9597% ( 5) 00:08:28.821 20971.520 - 21072.345: 95.0227% ( 5) 00:08:28.821 21072.345 - 21173.169: 95.1235% ( 8) 00:08:28.821 21173.169 - 21273.994: 95.2369% ( 9) 00:08:28.821 21273.994 - 
21374.818: 95.2747% ( 3) 00:08:28.821 21374.818 - 21475.643: 95.3377% ( 5) 00:08:28.821 21475.643 - 21576.468: 95.4007% ( 5) 00:08:28.821 21576.468 - 21677.292: 95.5015% ( 8) 00:08:28.821 21677.292 - 21778.117: 95.6653% ( 13) 00:08:28.821 21778.117 - 21878.942: 95.7409% ( 6) 00:08:28.821 21878.942 - 21979.766: 95.8165% ( 6) 00:08:28.821 21979.766 - 22080.591: 95.9299% ( 9) 00:08:28.821 22080.591 - 22181.415: 96.0307% ( 8) 00:08:28.821 22181.415 - 22282.240: 96.1442% ( 9) 00:08:28.821 22282.240 - 22383.065: 96.2450% ( 8) 00:08:28.821 22383.065 - 22483.889: 96.3584% ( 9) 00:08:28.821 22483.889 - 22584.714: 96.4844% ( 10) 00:08:28.821 22584.714 - 22685.538: 96.6608% ( 14) 00:08:28.821 22685.538 - 22786.363: 96.8246% ( 13) 00:08:28.821 22786.363 - 22887.188: 96.9506% ( 10) 00:08:28.821 22887.188 - 22988.012: 97.0262% ( 6) 00:08:28.821 22988.012 - 23088.837: 97.0892% ( 5) 00:08:28.821 23088.837 - 23189.662: 97.1522% ( 5) 00:08:28.821 23189.662 - 23290.486: 97.2278% ( 6) 00:08:28.821 23290.486 - 23391.311: 97.2782% ( 4) 00:08:28.821 23391.311 - 23492.135: 97.3538% ( 6) 00:08:28.821 23492.135 - 23592.960: 97.4168% ( 5) 00:08:28.821 23592.960 - 23693.785: 97.4798% ( 5) 00:08:28.821 23693.785 - 23794.609: 97.5554% ( 6) 00:08:28.821 23794.609 - 23895.434: 97.6310% ( 6) 00:08:28.821 23895.434 - 23996.258: 97.6815% ( 4) 00:08:28.821 23996.258 - 24097.083: 97.7319% ( 4) 00:08:28.821 24097.083 - 24197.908: 97.7697% ( 3) 00:08:28.821 24197.908 - 24298.732: 97.8201% ( 4) 00:08:28.821 24298.732 - 24399.557: 97.8579% ( 3) 00:08:28.821 24399.557 - 24500.382: 97.9083% ( 4) 00:08:28.821 24500.382 - 24601.206: 97.9587% ( 4) 00:08:28.821 24601.206 - 24702.031: 97.9965% ( 3) 00:08:28.821 24702.031 - 24802.855: 98.0343% ( 3) 00:08:28.821 24802.855 - 24903.680: 98.0847% ( 4) 00:08:28.821 24903.680 - 25004.505: 98.1225% ( 3) 00:08:28.821 25004.505 - 25105.329: 98.1729% ( 4) 00:08:28.821 25105.329 - 25206.154: 98.2107% ( 3) 00:08:28.821 25206.154 - 25306.978: 98.2611% ( 4) 00:08:28.821 25306.978 - 25407.803: 98.3115% ( 4) 00:08:28.821 25407.803 - 25508.628: 98.3493% ( 3) 00:08:28.821 25508.628 - 25609.452: 98.3871% ( 3) 00:08:28.821 26617.698 - 26819.348: 98.3997% ( 1) 00:08:28.821 26819.348 - 27020.997: 98.4753% ( 6) 00:08:28.821 27020.997 - 27222.646: 98.5635% ( 7) 00:08:28.821 27222.646 - 27424.295: 98.6517% ( 7) 00:08:28.821 27424.295 - 27625.945: 98.7273% ( 6) 00:08:28.821 27625.945 - 27827.594: 98.8029% ( 6) 00:08:28.821 27827.594 - 28029.243: 98.8911% ( 7) 00:08:28.821 28029.243 - 28230.892: 98.9667% ( 6) 00:08:28.821 28230.892 - 28432.542: 99.0549% ( 7) 00:08:28.821 28432.542 - 28634.191: 99.1305% ( 6) 00:08:28.821 28634.191 - 28835.840: 99.1935% ( 5) 00:08:28.821 40329.846 - 40531.495: 99.2440% ( 4) 00:08:28.821 40531.495 - 40733.145: 99.3448% ( 8) 00:08:28.821 40733.145 - 40934.794: 99.4456% ( 8) 00:08:28.821 40934.794 - 41136.443: 99.5464% ( 8) 00:08:28.821 41136.443 - 41338.092: 99.6472% ( 8) 00:08:28.821 41338.092 - 41539.742: 99.7480% ( 8) 00:08:28.821 41539.742 - 41741.391: 99.8362% ( 7) 00:08:28.821 41741.391 - 41943.040: 99.9244% ( 7) 00:08:28.821 41943.040 - 42144.689: 100.0000% ( 6) 00:08:28.821 00:08:28.821 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:28.821 ============================================================================== 00:08:28.821 Range in us Cumulative IO count 00:08:28.821 11141.120 - 11191.532: 0.0252% ( 2) 00:08:28.821 11191.532 - 11241.945: 0.0882% ( 5) 00:08:28.821 11241.945 - 11292.357: 0.1512% ( 5) 00:08:28.821 11292.357 - 11342.769: 0.1638% ( 1) 
00:08:28.821 11342.769 - 11393.182: 0.2016% ( 3) 00:08:28.821 11393.182 - 11443.594: 0.2394% ( 3) 00:08:28.821 11443.594 - 11494.006: 0.2772% ( 3) 00:08:28.821 11494.006 - 11544.418: 0.3150% ( 3) 00:08:28.821 11544.418 - 11594.831: 0.3528% ( 3) 00:08:28.821 11594.831 - 11645.243: 0.3906% ( 3) 00:08:28.821 11645.243 - 11695.655: 0.4284% ( 3) 00:08:28.821 11695.655 - 11746.068: 0.4410% ( 1) 00:08:28.821 11746.068 - 11796.480: 0.4914% ( 4) 00:08:28.821 11796.480 - 11846.892: 0.5292% ( 3) 00:08:28.821 11846.892 - 11897.305: 0.5544% ( 2) 00:08:28.821 11897.305 - 11947.717: 0.6174% ( 5) 00:08:28.821 11947.717 - 11998.129: 0.6300% ( 1) 00:08:28.821 11998.129 - 12048.542: 0.6552% ( 2) 00:08:28.821 12048.542 - 12098.954: 0.6804% ( 2) 00:08:28.821 12098.954 - 12149.366: 0.7308% ( 4) 00:08:28.821 12199.778 - 12250.191: 0.8065% ( 6) 00:08:28.821 12804.726 - 12855.138: 0.8443% ( 3) 00:08:28.821 12855.138 - 12905.551: 0.8569% ( 1) 00:08:28.821 12905.551 - 13006.375: 0.8947% ( 3) 00:08:28.821 13006.375 - 13107.200: 0.9451% ( 4) 00:08:28.821 13107.200 - 13208.025: 1.0963% ( 12) 00:08:28.821 13208.025 - 13308.849: 1.3861% ( 23) 00:08:28.821 13308.849 - 13409.674: 1.8649% ( 38) 00:08:28.821 13409.674 - 13510.498: 2.2807% ( 33) 00:08:28.821 13510.498 - 13611.323: 2.7722% ( 39) 00:08:28.821 13611.323 - 13712.148: 3.3896% ( 49) 00:08:28.821 13712.148 - 13812.972: 4.3851% ( 79) 00:08:28.821 13812.972 - 13913.797: 5.5066% ( 89) 00:08:28.821 13913.797 - 14014.622: 6.5902% ( 86) 00:08:28.821 14014.622 - 14115.446: 7.7747% ( 94) 00:08:28.821 14115.446 - 14216.271: 9.7026% ( 153) 00:08:28.821 14216.271 - 14317.095: 11.9582% ( 179) 00:08:28.821 14317.095 - 14417.920: 14.7807% ( 224) 00:08:28.821 14417.920 - 14518.745: 17.4269% ( 210) 00:08:28.821 14518.745 - 14619.569: 20.5771% ( 250) 00:08:28.821 14619.569 - 14720.394: 24.0045% ( 272) 00:08:28.821 14720.394 - 14821.218: 27.2177% ( 255) 00:08:28.821 14821.218 - 14922.043: 30.9980% ( 300) 00:08:28.821 14922.043 - 15022.868: 35.1562% ( 330) 00:08:28.821 15022.868 - 15123.692: 38.7475% ( 285) 00:08:28.821 15123.692 - 15224.517: 42.8427% ( 325) 00:08:28.821 15224.517 - 15325.342: 47.1396% ( 341) 00:08:28.821 15325.342 - 15426.166: 51.1593% ( 319) 00:08:28.821 15426.166 - 15526.991: 54.5111% ( 266) 00:08:28.821 15526.991 - 15627.815: 58.0267% ( 279) 00:08:28.821 15627.815 - 15728.640: 61.3281% ( 262) 00:08:28.821 15728.640 - 15829.465: 64.2137% ( 229) 00:08:28.821 15829.465 - 15930.289: 66.8221% ( 207) 00:08:28.821 15930.289 - 16031.114: 69.3296% ( 199) 00:08:28.821 16031.114 - 16131.938: 71.7112% ( 189) 00:08:28.821 16131.938 - 16232.763: 73.6139% ( 151) 00:08:28.821 16232.763 - 16333.588: 75.0504% ( 114) 00:08:28.821 16333.588 - 16434.412: 76.5625% ( 120) 00:08:28.821 16434.412 - 16535.237: 77.8730% ( 104) 00:08:28.821 16535.237 - 16636.062: 79.2591% ( 110) 00:08:28.821 16636.062 - 16736.886: 80.3679% ( 88) 00:08:28.821 16736.886 - 16837.711: 81.3004% ( 74) 00:08:28.821 16837.711 - 16938.535: 82.1321% ( 66) 00:08:28.821 16938.535 - 17039.360: 82.9511% ( 65) 00:08:28.821 17039.360 - 17140.185: 83.8836% ( 74) 00:08:28.821 17140.185 - 17241.009: 84.7278% ( 67) 00:08:28.821 17241.009 - 17341.834: 85.5469% ( 65) 00:08:28.821 17341.834 - 17442.658: 86.2903% ( 59) 00:08:28.821 17442.658 - 17543.483: 86.8574% ( 45) 00:08:28.821 17543.483 - 17644.308: 87.4748% ( 49) 00:08:28.821 17644.308 - 17745.132: 88.0796% ( 48) 00:08:28.821 17745.132 - 17845.957: 88.6845% ( 48) 00:08:28.821 17845.957 - 17946.782: 89.1507% ( 37) 00:08:28.821 17946.782 - 18047.606: 89.5917% ( 35) 00:08:28.821 
18047.606 - 18148.431: 90.0328% ( 35) 00:08:28.821 18148.431 - 18249.255: 90.4990% ( 37) 00:08:28.821 18249.255 - 18350.080: 90.8266% ( 26) 00:08:28.821 18350.080 - 18450.905: 91.1416% ( 25) 00:08:28.821 18450.905 - 18551.729: 91.3936% ( 20) 00:08:28.821 18551.729 - 18652.554: 91.5953% ( 16) 00:08:28.821 18652.554 - 18753.378: 91.9355% ( 27) 00:08:28.821 18753.378 - 18854.203: 92.0615% ( 10) 00:08:28.821 18854.203 - 18955.028: 92.2127% ( 12) 00:08:28.821 18955.028 - 19055.852: 92.3639% ( 12) 00:08:28.821 19055.852 - 19156.677: 92.5277% ( 13) 00:08:28.821 19156.677 - 19257.502: 92.7167% ( 15) 00:08:28.821 19257.502 - 19358.326: 92.9183% ( 16) 00:08:28.821 19358.326 - 19459.151: 93.1326% ( 17) 00:08:28.821 19459.151 - 19559.975: 93.3342% ( 16) 00:08:28.821 19559.975 - 19660.800: 93.5232% ( 15) 00:08:28.821 19660.800 - 19761.625: 93.6492% ( 10) 00:08:28.821 19761.625 - 19862.449: 93.7878% ( 11) 00:08:28.821 19862.449 - 19963.274: 94.0776% ( 23) 00:08:28.821 19963.274 - 20064.098: 94.1406% ( 5) 00:08:28.821 20064.098 - 20164.923: 94.3044% ( 13) 00:08:28.821 20164.923 - 20265.748: 94.4304% ( 10) 00:08:28.821 20265.748 - 20366.572: 94.5186% ( 7) 00:08:28.821 20366.572 - 20467.397: 94.6321% ( 9) 00:08:28.821 20467.397 - 20568.222: 94.7329% ( 8) 00:08:28.821 20568.222 - 20669.046: 94.8589% ( 10) 00:08:28.821 20669.046 - 20769.871: 94.9471% ( 7) 00:08:28.821 20769.871 - 20870.695: 95.0227% ( 6) 00:08:28.821 20870.695 - 20971.520: 95.1109% ( 7) 00:08:28.821 20971.520 - 21072.345: 95.2369% ( 10) 00:08:28.821 21072.345 - 21173.169: 95.3377% ( 8) 00:08:28.821 21173.169 - 21273.994: 95.4763% ( 11) 00:08:28.821 21273.994 - 21374.818: 95.5645% ( 7) 00:08:28.821 21374.818 - 21475.643: 95.6779% ( 9) 00:08:28.821 21475.643 - 21576.468: 95.7409% ( 5) 00:08:28.821 21576.468 - 21677.292: 95.8039% ( 5) 00:08:28.821 21677.292 - 21778.117: 95.8795% ( 6) 00:08:28.822 21778.117 - 21878.942: 95.9677% ( 7) 00:08:28.822 21878.942 - 21979.766: 96.0685% ( 8) 00:08:28.822 21979.766 - 22080.591: 96.1442% ( 6) 00:08:28.822 22080.591 - 22181.415: 96.2324% ( 7) 00:08:28.822 22181.415 - 22282.240: 96.3332% ( 8) 00:08:28.822 22282.240 - 22383.065: 96.3962% ( 5) 00:08:28.822 22383.065 - 22483.889: 96.5474% ( 12) 00:08:28.822 22483.889 - 22584.714: 96.6230% ( 6) 00:08:28.822 22584.714 - 22685.538: 96.8246% ( 16) 00:08:28.822 22685.538 - 22786.363: 96.9128% ( 7) 00:08:28.822 22786.363 - 22887.188: 97.0136% ( 8) 00:08:28.822 22887.188 - 22988.012: 97.1522% ( 11) 00:08:28.822 22988.012 - 23088.837: 97.2530% ( 8) 00:08:28.822 23088.837 - 23189.662: 97.4168% ( 13) 00:08:28.822 23189.662 - 23290.486: 97.5050% ( 7) 00:08:28.822 23290.486 - 23391.311: 97.6310% ( 10) 00:08:28.822 23391.311 - 23492.135: 97.7319% ( 8) 00:08:28.822 23492.135 - 23592.960: 97.8327% ( 8) 00:08:28.822 23592.960 - 23693.785: 97.9083% ( 6) 00:08:28.822 23693.785 - 23794.609: 97.9965% ( 7) 00:08:28.822 23794.609 - 23895.434: 98.0595% ( 5) 00:08:28.822 23895.434 - 23996.258: 98.2107% ( 12) 00:08:28.822 23996.258 - 24097.083: 98.2485% ( 3) 00:08:28.822 24097.083 - 24197.908: 98.3367% ( 7) 00:08:28.822 24197.908 - 24298.732: 98.3745% ( 3) 00:08:28.822 24298.732 - 24399.557: 98.3871% ( 1) 00:08:28.822 29037.489 - 29239.138: 98.3997% ( 1) 00:08:28.822 29239.138 - 29440.788: 98.4375% ( 3) 00:08:28.822 29440.788 - 29642.437: 98.4879% ( 4) 00:08:28.822 29642.437 - 29844.086: 98.5635% ( 6) 00:08:28.822 29844.086 - 30045.735: 98.6517% ( 7) 00:08:28.822 30045.735 - 30247.385: 98.6895% ( 3) 00:08:28.822 30247.385 - 30449.034: 98.7777% ( 7) 00:08:28.822 30449.034 - 30650.683: 
98.8407% ( 5) 00:08:28.822 30650.683 - 30852.332: 98.9289% ( 7) 00:08:28.822 30852.332 - 31053.982: 98.9919% ( 5) 00:08:28.822 31053.982 - 31255.631: 99.0549% ( 5) 00:08:28.822 31255.631 - 31457.280: 99.1809% ( 10) 00:08:28.822 31457.280 - 31658.929: 99.1935% ( 1) 00:08:28.822 40733.145 - 40934.794: 99.2566% ( 5) 00:08:28.822 40934.794 - 41136.443: 99.3322% ( 6) 00:08:28.822 41136.443 - 41338.092: 99.4204% ( 7) 00:08:28.822 41338.092 - 41539.742: 99.5086% ( 7) 00:08:28.822 41539.742 - 41741.391: 99.5968% ( 7) 00:08:28.822 41741.391 - 41943.040: 99.6724% ( 6) 00:08:28.822 41943.040 - 42144.689: 99.7606% ( 7) 00:08:28.822 42144.689 - 42346.338: 99.8362% ( 6) 00:08:28.822 42346.338 - 42547.988: 99.9244% ( 7) 00:08:28.822 42547.988 - 42749.637: 100.0000% ( 6) 00:08:28.822 00:08:28.822 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:28.822 ============================================================================== 00:08:28.822 Range in us Cumulative IO count 00:08:28.822 10636.997 - 10687.409: 0.0504% ( 4) 00:08:28.822 10687.409 - 10737.822: 0.1260% ( 6) 00:08:28.822 10737.822 - 10788.234: 0.1512% ( 2) 00:08:28.822 10788.234 - 10838.646: 0.1890% ( 3) 00:08:28.822 10838.646 - 10889.058: 0.2268% ( 3) 00:08:28.822 10889.058 - 10939.471: 0.2646% ( 3) 00:08:28.822 10939.471 - 10989.883: 0.3150% ( 4) 00:08:28.822 10989.883 - 11040.295: 0.3528% ( 3) 00:08:28.822 11040.295 - 11090.708: 0.4032% ( 4) 00:08:28.822 11090.708 - 11141.120: 0.4410% ( 3) 00:08:28.822 11141.120 - 11191.532: 0.4914% ( 4) 00:08:28.822 11191.532 - 11241.945: 0.5292% ( 3) 00:08:28.822 11241.945 - 11292.357: 0.5796% ( 4) 00:08:28.822 11292.357 - 11342.769: 0.6174% ( 3) 00:08:28.822 11342.769 - 11393.182: 0.6426% ( 2) 00:08:28.822 11393.182 - 11443.594: 0.6804% ( 3) 00:08:28.822 11443.594 - 11494.006: 0.7308% ( 4) 00:08:28.822 11494.006 - 11544.418: 0.7686% ( 3) 00:08:28.822 11544.418 - 11594.831: 0.8065% ( 3) 00:08:28.822 12401.428 - 12451.840: 0.8317% ( 2) 00:08:28.822 12451.840 - 12502.252: 0.8569% ( 2) 00:08:28.822 12502.252 - 12552.665: 0.8947% ( 3) 00:08:28.822 12552.665 - 12603.077: 0.9199% ( 2) 00:08:28.822 12603.077 - 12653.489: 0.9451% ( 2) 00:08:28.822 12653.489 - 12703.902: 0.9703% ( 2) 00:08:28.822 12703.902 - 12754.314: 1.0207% ( 4) 00:08:28.822 12754.314 - 12804.726: 1.0585% ( 3) 00:08:28.822 12804.726 - 12855.138: 1.1089% ( 4) 00:08:28.822 12855.138 - 12905.551: 1.1845% ( 6) 00:08:28.822 12905.551 - 13006.375: 1.2853% ( 8) 00:08:28.822 13006.375 - 13107.200: 1.3861% ( 8) 00:08:28.822 13107.200 - 13208.025: 1.5247% ( 11) 00:08:28.822 13208.025 - 13308.849: 1.6381% ( 9) 00:08:28.822 13308.849 - 13409.674: 1.8271% ( 15) 00:08:28.822 13409.674 - 13510.498: 2.0917% ( 21) 00:08:28.822 13510.498 - 13611.323: 2.5958% ( 40) 00:08:28.822 13611.323 - 13712.148: 3.1628% ( 45) 00:08:28.822 13712.148 - 13812.972: 3.8684% ( 56) 00:08:28.822 13812.972 - 13913.797: 4.7757% ( 72) 00:08:28.822 13913.797 - 14014.622: 5.8342% ( 84) 00:08:28.822 14014.622 - 14115.446: 7.0312% ( 95) 00:08:28.822 14115.446 - 14216.271: 8.6064% ( 125) 00:08:28.822 14216.271 - 14317.095: 10.7233% ( 168) 00:08:28.822 14317.095 - 14417.920: 13.5585% ( 225) 00:08:28.822 14417.920 - 14518.745: 16.6457% ( 245) 00:08:28.822 14518.745 - 14619.569: 20.1235% ( 276) 00:08:28.822 14619.569 - 14720.394: 23.6895% ( 283) 00:08:28.822 14720.394 - 14821.218: 27.8604% ( 331) 00:08:28.822 14821.218 - 14922.043: 31.9808% ( 327) 00:08:28.822 14922.043 - 15022.868: 36.0257% ( 321) 00:08:28.822 15022.868 - 15123.692: 40.1210% ( 325) 00:08:28.822 15123.692 - 
15224.517: 44.3926% ( 339) 00:08:28.822 15224.517 - 15325.342: 48.2233% ( 304) 00:08:28.822 15325.342 - 15426.166: 52.0161% ( 301) 00:08:28.822 15426.166 - 15526.991: 55.4814% ( 275) 00:08:28.822 15526.991 - 15627.815: 58.8080% ( 264) 00:08:28.822 15627.815 - 15728.640: 62.0464% ( 257) 00:08:28.822 15728.640 - 15829.465: 64.7555% ( 215) 00:08:28.822 15829.465 - 15930.289: 67.1497% ( 190) 00:08:28.822 15930.289 - 16031.114: 69.4178% ( 180) 00:08:28.822 16031.114 - 16131.938: 71.5096% ( 166) 00:08:28.822 16131.938 - 16232.763: 73.2611% ( 139) 00:08:28.822 16232.763 - 16333.588: 74.8362% ( 125) 00:08:28.822 16333.588 - 16434.412: 76.1593% ( 105) 00:08:28.822 16434.412 - 16535.237: 77.5832% ( 113) 00:08:28.822 16535.237 - 16636.062: 78.9945% ( 112) 00:08:28.822 16636.062 - 16736.886: 80.4309% ( 114) 00:08:28.822 16736.886 - 16837.711: 81.8422% ( 112) 00:08:28.822 16837.711 - 16938.535: 82.8881% ( 83) 00:08:28.822 16938.535 - 17039.360: 83.9088% ( 81) 00:08:28.822 17039.360 - 17140.185: 84.8160% ( 72) 00:08:28.822 17140.185 - 17241.009: 85.5595% ( 59) 00:08:28.822 17241.009 - 17341.834: 86.2777% ( 57) 00:08:28.822 17341.834 - 17442.658: 86.9204% ( 51) 00:08:28.822 17442.658 - 17543.483: 87.5000% ( 46) 00:08:28.822 17543.483 - 17644.308: 88.0418% ( 43) 00:08:28.822 17644.308 - 17745.132: 88.4325% ( 31) 00:08:28.822 17745.132 - 17845.957: 88.8231% ( 31) 00:08:28.822 17845.957 - 17946.782: 89.1129% ( 23) 00:08:28.822 17946.782 - 18047.606: 89.4657% ( 28) 00:08:28.822 18047.606 - 18148.431: 89.7933% ( 26) 00:08:28.822 18148.431 - 18249.255: 90.0958% ( 24) 00:08:28.822 18249.255 - 18350.080: 90.3604% ( 21) 00:08:28.822 18350.080 - 18450.905: 90.6502% ( 23) 00:08:28.822 18450.905 - 18551.729: 90.9148% ( 21) 00:08:28.822 18551.729 - 18652.554: 91.1542% ( 19) 00:08:28.822 18652.554 - 18753.378: 91.4062% ( 20) 00:08:28.822 18753.378 - 18854.203: 91.6331% ( 18) 00:08:28.822 18854.203 - 18955.028: 91.8851% ( 20) 00:08:28.822 18955.028 - 19055.852: 92.1875% ( 24) 00:08:28.822 19055.852 - 19156.677: 92.4269% ( 19) 00:08:28.822 19156.677 - 19257.502: 92.6033% ( 14) 00:08:28.822 19257.502 - 19358.326: 92.7797% ( 14) 00:08:28.822 19358.326 - 19459.151: 92.9688% ( 15) 00:08:28.822 19459.151 - 19559.975: 93.1326% ( 13) 00:08:28.822 19559.975 - 19660.800: 93.2334% ( 8) 00:08:28.822 19660.800 - 19761.625: 93.3972% ( 13) 00:08:28.822 19761.625 - 19862.449: 93.6618% ( 21) 00:08:28.822 19862.449 - 19963.274: 93.8634% ( 16) 00:08:28.822 19963.274 - 20064.098: 94.0272% ( 13) 00:08:28.822 20064.098 - 20164.923: 94.2036% ( 14) 00:08:28.822 20164.923 - 20265.748: 94.3548% ( 12) 00:08:28.822 20265.748 - 20366.572: 94.5312% ( 14) 00:08:28.822 20366.572 - 20467.397: 94.7581% ( 18) 00:08:28.822 20467.397 - 20568.222: 94.9093% ( 12) 00:08:28.822 20568.222 - 20669.046: 95.0731% ( 13) 00:08:28.822 20669.046 - 20769.871: 95.2621% ( 15) 00:08:28.822 20769.871 - 20870.695: 95.3629% ( 8) 00:08:28.822 20870.695 - 20971.520: 95.4385% ( 6) 00:08:28.822 20971.520 - 21072.345: 95.5519% ( 9) 00:08:28.822 21072.345 - 21173.169: 95.6149% ( 5) 00:08:28.822 21173.169 - 21273.994: 95.7157% ( 8) 00:08:28.822 21273.994 - 21374.818: 95.8039% ( 7) 00:08:28.822 21374.818 - 21475.643: 95.9173% ( 9) 00:08:28.822 21475.643 - 21576.468: 96.0181% ( 8) 00:08:28.822 21576.468 - 21677.292: 96.1316% ( 9) 00:08:28.822 21677.292 - 21778.117: 96.2324% ( 8) 00:08:28.822 21778.117 - 21878.942: 96.2954% ( 5) 00:08:28.822 21878.942 - 21979.766: 96.3836% ( 7) 00:08:28.822 21979.766 - 22080.591: 96.5726% ( 15) 00:08:28.822 22080.591 - 22181.415: 96.6734% ( 8) 
00:08:28.822 22181.415 - 22282.240: 96.7616% ( 7) 00:08:28.822 22282.240 - 22383.065: 96.8750% ( 9) 00:08:28.822 22383.065 - 22483.889: 96.9884% ( 9) 00:08:28.822 22483.889 - 22584.714: 97.0892% ( 8) 00:08:28.822 22584.714 - 22685.538: 97.1900% ( 8) 00:08:28.822 22685.538 - 22786.363: 97.2908% ( 8) 00:08:28.822 22786.363 - 22887.188: 97.3412% ( 4) 00:08:28.822 22887.188 - 22988.012: 97.4672% ( 10) 00:08:28.822 22988.012 - 23088.837: 97.5680% ( 8) 00:08:28.822 23088.837 - 23189.662: 97.6689% ( 8) 00:08:28.822 23189.662 - 23290.486: 97.7571% ( 7) 00:08:28.822 23290.486 - 23391.311: 97.8579% ( 8) 00:08:28.822 23391.311 - 23492.135: 97.9209% ( 5) 00:08:28.823 23492.135 - 23592.960: 97.9839% ( 5) 00:08:28.823 23592.960 - 23693.785: 98.0343% ( 4) 00:08:28.823 23693.785 - 23794.609: 98.0721% ( 3) 00:08:28.823 23794.609 - 23895.434: 98.1225% ( 4) 00:08:28.823 23895.434 - 23996.258: 98.1729% ( 4) 00:08:28.823 23996.258 - 24097.083: 98.2233% ( 4) 00:08:28.823 24097.083 - 24197.908: 98.2737% ( 4) 00:08:28.823 24197.908 - 24298.732: 98.3241% ( 4) 00:08:28.823 24298.732 - 24399.557: 98.3745% ( 4) 00:08:28.823 24399.557 - 24500.382: 98.3871% ( 1) 00:08:28.823 29440.788 - 29642.437: 98.4501% ( 5) 00:08:28.823 29642.437 - 29844.086: 98.5131% ( 5) 00:08:28.823 29844.086 - 30045.735: 98.5887% ( 6) 00:08:28.823 30045.735 - 30247.385: 98.6769% ( 7) 00:08:28.823 30247.385 - 30449.034: 98.7777% ( 8) 00:08:28.823 30449.034 - 30650.683: 98.8659% ( 7) 00:08:28.823 30650.683 - 30852.332: 98.9667% ( 8) 00:08:28.823 30852.332 - 31053.982: 99.0675% ( 8) 00:08:28.823 31053.982 - 31255.631: 99.1557% ( 7) 00:08:28.823 31255.631 - 31457.280: 99.1935% ( 3) 00:08:28.823 40531.495 - 40733.145: 99.2188% ( 2) 00:08:28.823 40733.145 - 40934.794: 99.3196% ( 8) 00:08:28.823 40934.794 - 41136.443: 99.4330% ( 9) 00:08:28.823 41136.443 - 41338.092: 99.5464% ( 9) 00:08:28.823 41338.092 - 41539.742: 99.6472% ( 8) 00:08:28.823 41539.742 - 41741.391: 99.7354% ( 7) 00:08:28.823 41741.391 - 41943.040: 99.8488% ( 9) 00:08:28.823 41943.040 - 42144.689: 99.9496% ( 8) 00:08:28.823 42144.689 - 42346.338: 100.0000% ( 4) 00:08:28.823 00:08:28.823 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:28.823 ============================================================================== 00:08:28.823 Range in us Cumulative IO count 00:08:28.823 7965.145 - 8015.557: 0.0504% ( 4) 00:08:28.823 8015.557 - 8065.969: 0.0882% ( 3) 00:08:28.823 8065.969 - 8116.382: 0.1512% ( 5) 00:08:28.823 8116.382 - 8166.794: 0.1890% ( 3) 00:08:28.823 8166.794 - 8217.206: 0.2394% ( 4) 00:08:28.823 8217.206 - 8267.618: 0.2772% ( 3) 00:08:28.823 8267.618 - 8318.031: 0.3276% ( 4) 00:08:28.823 8318.031 - 8368.443: 0.3780% ( 4) 00:08:28.823 8368.443 - 8418.855: 0.4158% ( 3) 00:08:28.823 8418.855 - 8469.268: 0.4662% ( 4) 00:08:28.823 8469.268 - 8519.680: 0.5040% ( 3) 00:08:28.823 8519.680 - 8570.092: 0.5544% ( 4) 00:08:28.823 8570.092 - 8620.505: 0.5922% ( 3) 00:08:28.823 8620.505 - 8670.917: 0.6300% ( 3) 00:08:28.823 8670.917 - 8721.329: 0.6678% ( 3) 00:08:28.823 8721.329 - 8771.742: 0.7182% ( 4) 00:08:28.823 8771.742 - 8822.154: 0.7560% ( 3) 00:08:28.823 8822.154 - 8872.566: 0.7939% ( 3) 00:08:28.823 8872.566 - 8922.978: 0.8065% ( 1) 00:08:28.823 12603.077 - 12653.489: 0.8191% ( 1) 00:08:28.823 12653.489 - 12703.902: 0.8569% ( 3) 00:08:28.823 12703.902 - 12754.314: 0.8821% ( 2) 00:08:28.823 12754.314 - 12804.726: 0.9199% ( 3) 00:08:28.823 12804.726 - 12855.138: 0.9577% ( 3) 00:08:28.823 12855.138 - 12905.551: 1.0333% ( 6) 00:08:28.823 12905.551 - 13006.375: 
1.1467% ( 9) 00:08:28.823 13006.375 - 13107.200: 1.2853% ( 11) 00:08:28.823 13107.200 - 13208.025: 1.4743% ( 15) 00:08:28.823 13208.025 - 13308.849: 1.6255% ( 12) 00:08:28.823 13308.849 - 13409.674: 1.8145% ( 15) 00:08:28.823 13409.674 - 13510.498: 2.1421% ( 26) 00:08:28.823 13510.498 - 13611.323: 2.5958% ( 36) 00:08:28.823 13611.323 - 13712.148: 3.1754% ( 46) 00:08:28.823 13712.148 - 13812.972: 4.1583% ( 78) 00:08:28.823 13812.972 - 13913.797: 5.1285% ( 77) 00:08:28.823 13913.797 - 14014.622: 6.2878% ( 92) 00:08:28.823 14014.622 - 14115.446: 7.7747% ( 118) 00:08:28.823 14115.446 - 14216.271: 9.2364% ( 116) 00:08:28.823 14216.271 - 14317.095: 11.0887% ( 147) 00:08:28.823 14317.095 - 14417.920: 13.4073% ( 184) 00:08:28.823 14417.920 - 14518.745: 16.3180% ( 231) 00:08:28.823 14518.745 - 14619.569: 20.0857% ( 299) 00:08:28.823 14619.569 - 14720.394: 24.1431% ( 322) 00:08:28.823 14720.394 - 14821.218: 28.2888% ( 329) 00:08:28.823 14821.218 - 14922.043: 32.4723% ( 332) 00:08:28.823 14922.043 - 15022.868: 36.6809% ( 334) 00:08:28.823 15022.868 - 15123.692: 40.5242% ( 305) 00:08:28.823 15123.692 - 15224.517: 44.4178% ( 309) 00:08:28.823 15224.517 - 15325.342: 48.2107% ( 301) 00:08:28.823 15325.342 - 15426.166: 51.9279% ( 295) 00:08:28.823 15426.166 - 15526.991: 55.4309% ( 278) 00:08:28.823 15526.991 - 15627.815: 58.6190% ( 253) 00:08:28.823 15627.815 - 15728.640: 61.4289% ( 223) 00:08:28.823 15728.640 - 15829.465: 64.0877% ( 211) 00:08:28.823 15829.465 - 15930.289: 66.3432% ( 179) 00:08:28.823 15930.289 - 16031.114: 68.3972% ( 163) 00:08:28.823 16031.114 - 16131.938: 70.2117% ( 144) 00:08:28.823 16131.938 - 16232.763: 72.2908% ( 165) 00:08:28.823 16232.763 - 16333.588: 74.1557% ( 148) 00:08:28.823 16333.588 - 16434.412: 75.9451% ( 142) 00:08:28.823 16434.412 - 16535.237: 77.5832% ( 130) 00:08:28.823 16535.237 - 16636.062: 79.2969% ( 136) 00:08:28.823 16636.062 - 16736.886: 80.9224% ( 129) 00:08:28.823 16736.886 - 16837.711: 82.3085% ( 110) 00:08:28.823 16837.711 - 16938.535: 83.5055% ( 95) 00:08:28.823 16938.535 - 17039.360: 84.5388% ( 82) 00:08:28.823 17039.360 - 17140.185: 85.4713% ( 74) 00:08:28.823 17140.185 - 17241.009: 86.2147% ( 59) 00:08:28.823 17241.009 - 17341.834: 86.8952% ( 54) 00:08:28.823 17341.834 - 17442.658: 87.4622% ( 45) 00:08:28.823 17442.658 - 17543.483: 87.8906% ( 34) 00:08:28.823 17543.483 - 17644.308: 88.2308% ( 27) 00:08:28.823 17644.308 - 17745.132: 88.4955% ( 21) 00:08:28.823 17745.132 - 17845.957: 88.8105% ( 25) 00:08:28.823 17845.957 - 17946.782: 89.0751% ( 21) 00:08:28.823 17946.782 - 18047.606: 89.2263% ( 12) 00:08:28.823 18047.606 - 18148.431: 89.3649% ( 11) 00:08:28.823 18148.431 - 18249.255: 89.4909% ( 10) 00:08:28.823 18249.255 - 18350.080: 89.6169% ( 10) 00:08:28.823 18350.080 - 18450.905: 89.8059% ( 15) 00:08:28.823 18450.905 - 18551.729: 90.1336% ( 26) 00:08:28.823 18551.729 - 18652.554: 90.4486% ( 25) 00:08:28.823 18652.554 - 18753.378: 90.8014% ( 28) 00:08:28.823 18753.378 - 18854.203: 91.1668% ( 29) 00:08:28.823 18854.203 - 18955.028: 91.4315% ( 21) 00:08:28.823 18955.028 - 19055.852: 91.7465% ( 25) 00:08:28.823 19055.852 - 19156.677: 92.0237% ( 22) 00:08:28.823 19156.677 - 19257.502: 92.2631% ( 19) 00:08:28.823 19257.502 - 19358.326: 92.5529% ( 23) 00:08:28.823 19358.326 - 19459.151: 92.8427% ( 23) 00:08:28.823 19459.151 - 19559.975: 93.1578% ( 25) 00:08:28.823 19559.975 - 19660.800: 93.5106% ( 28) 00:08:28.823 19660.800 - 19761.625: 93.8886% ( 30) 00:08:28.823 19761.625 - 19862.449: 94.1910% ( 24) 00:08:28.823 19862.449 - 19963.274: 94.3674% ( 14) 
00:08:28.823 19963.274 - 20064.098: 94.5439% ( 14) 00:08:28.823 20064.098 - 20164.923: 94.7203% ( 14) 00:08:28.823 20164.923 - 20265.748: 94.8841% ( 13) 00:08:28.823 20265.748 - 20366.572: 95.0227% ( 11) 00:08:28.823 20366.572 - 20467.397: 95.1739% ( 12) 00:08:28.823 20467.397 - 20568.222: 95.3377% ( 13) 00:08:28.823 20568.222 - 20669.046: 95.5771% ( 19) 00:08:28.823 20669.046 - 20769.871: 95.7031% ( 10) 00:08:28.823 20769.871 - 20870.695: 95.8417% ( 11) 00:08:28.823 20870.695 - 20971.520: 95.9677% ( 10) 00:08:28.823 20971.520 - 21072.345: 96.1064% ( 11) 00:08:28.823 21072.345 - 21173.169: 96.2324% ( 10) 00:08:28.823 21173.169 - 21273.994: 96.3332% ( 8) 00:08:28.823 21273.994 - 21374.818: 96.4214% ( 7) 00:08:28.823 21374.818 - 21475.643: 96.4718% ( 4) 00:08:28.823 21475.643 - 21576.468: 96.5474% ( 6) 00:08:28.823 21576.468 - 21677.292: 96.6482% ( 8) 00:08:28.823 21677.292 - 21778.117: 96.7616% ( 9) 00:08:28.823 21778.117 - 21878.942: 96.8750% ( 9) 00:08:28.823 21878.942 - 21979.766: 96.9758% ( 8) 00:08:28.823 21979.766 - 22080.591: 97.0640% ( 7) 00:08:28.823 22080.591 - 22181.415: 97.1144% ( 4) 00:08:28.823 22181.415 - 22282.240: 97.1900% ( 6) 00:08:28.823 22282.240 - 22383.065: 97.2278% ( 3) 00:08:28.823 22383.065 - 22483.889: 97.3034% ( 6) 00:08:28.823 22483.889 - 22584.714: 97.3916% ( 7) 00:08:28.823 22584.714 - 22685.538: 97.4672% ( 6) 00:08:28.823 22685.538 - 22786.363: 97.5428% ( 6) 00:08:28.823 22786.363 - 22887.188: 97.6310% ( 7) 00:08:28.823 22887.188 - 22988.012: 97.7319% ( 8) 00:08:28.823 22988.012 - 23088.837: 97.8453% ( 9) 00:08:28.823 23088.837 - 23189.662: 97.9461% ( 8) 00:08:28.823 23189.662 - 23290.486: 98.0091% ( 5) 00:08:28.823 23290.486 - 23391.311: 98.0721% ( 5) 00:08:28.823 23391.311 - 23492.135: 98.1351% ( 5) 00:08:28.823 23492.135 - 23592.960: 98.1981% ( 5) 00:08:28.823 23592.960 - 23693.785: 98.2611% ( 5) 00:08:28.823 23693.785 - 23794.609: 98.3115% ( 4) 00:08:28.823 23794.609 - 23895.434: 98.3619% ( 4) 00:08:28.823 23895.434 - 23996.258: 98.3871% ( 2) 00:08:28.823 29642.437 - 29844.086: 98.3997% ( 1) 00:08:28.823 29844.086 - 30045.735: 98.4879% ( 7) 00:08:28.823 30045.735 - 30247.385: 98.5635% ( 6) 00:08:28.823 30247.385 - 30449.034: 98.6391% ( 6) 00:08:28.823 30449.034 - 30650.683: 98.7273% ( 7) 00:08:28.823 30650.683 - 30852.332: 98.8155% ( 7) 00:08:28.823 30852.332 - 31053.982: 98.9163% ( 8) 00:08:28.823 31053.982 - 31255.631: 99.0045% ( 7) 00:08:28.823 31255.631 - 31457.280: 99.0927% ( 7) 00:08:28.823 31457.280 - 31658.929: 99.1809% ( 7) 00:08:28.823 31658.929 - 31860.578: 99.1935% ( 1) 00:08:28.823 41539.742 - 41741.391: 99.2314% ( 3) 00:08:28.823 41741.391 - 41943.040: 99.3070% ( 6) 00:08:28.823 41943.040 - 42144.689: 99.3826% ( 6) 00:08:28.823 42144.689 - 42346.338: 99.4582% ( 6) 00:08:28.823 42346.338 - 42547.988: 99.5086% ( 4) 00:08:28.823 42547.988 - 42749.637: 99.6094% ( 8) 00:08:28.824 42749.637 - 42951.286: 99.7228% ( 9) 00:08:28.824 42951.286 - 43152.935: 99.8236% ( 8) 00:08:28.824 43152.935 - 43354.585: 99.9496% ( 10) 00:08:28.824 43354.585 - 43556.234: 100.0000% ( 4) 00:08:28.824 00:08:28.824 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:28.824 ============================================================================== 00:08:28.824 Range in us Cumulative IO count 00:08:28.824 6452.775 - 6503.188: 0.0756% ( 6) 00:08:28.824 6503.188 - 6553.600: 0.1386% ( 5) 00:08:28.824 6553.600 - 6604.012: 0.1638% ( 2) 00:08:28.824 6604.012 - 6654.425: 0.2268% ( 5) 00:08:28.824 6654.425 - 6704.837: 0.2772% ( 4) 00:08:28.824 6704.837 - 
6755.249: 0.3150% ( 3) 00:08:28.824 6755.249 - 6805.662: 0.3654% ( 4) 00:08:28.824 6805.662 - 6856.074: 0.4032% ( 3) 00:08:28.824 6856.074 - 6906.486: 0.4536% ( 4) 00:08:28.824 6906.486 - 6956.898: 0.4914% ( 3) 00:08:28.824 6956.898 - 7007.311: 0.5292% ( 3) 00:08:28.824 7007.311 - 7057.723: 0.5796% ( 4) 00:08:28.824 7057.723 - 7108.135: 0.6048% ( 2) 00:08:28.824 7108.135 - 7158.548: 0.6300% ( 2) 00:08:28.824 7158.548 - 7208.960: 0.6804% ( 4) 00:08:28.824 7208.960 - 7259.372: 0.7182% ( 3) 00:08:28.824 7259.372 - 7309.785: 0.7686% ( 4) 00:08:28.824 7309.785 - 7360.197: 0.7939% ( 2) 00:08:28.824 7360.197 - 7410.609: 0.8065% ( 1) 00:08:28.824 12855.138 - 12905.551: 0.8317% ( 2) 00:08:28.824 12905.551 - 13006.375: 0.8947% ( 5) 00:08:28.824 13006.375 - 13107.200: 1.0081% ( 9) 00:08:28.824 13107.200 - 13208.025: 1.2349% ( 18) 00:08:28.824 13208.025 - 13308.849: 1.4365% ( 16) 00:08:28.824 13308.849 - 13409.674: 1.7137% ( 22) 00:08:28.824 13409.674 - 13510.498: 2.1925% ( 38) 00:08:28.824 13510.498 - 13611.323: 2.8604% ( 53) 00:08:28.824 13611.323 - 13712.148: 3.6416% ( 62) 00:08:28.824 13712.148 - 13812.972: 4.5237% ( 70) 00:08:28.824 13812.972 - 13913.797: 5.5570% ( 82) 00:08:28.824 13913.797 - 14014.622: 6.8170% ( 100) 00:08:28.824 14014.622 - 14115.446: 8.5559% ( 138) 00:08:28.824 14115.446 - 14216.271: 10.4083% ( 147) 00:08:28.824 14216.271 - 14317.095: 12.3362% ( 153) 00:08:28.824 14317.095 - 14417.920: 14.7303% ( 190) 00:08:28.824 14417.920 - 14518.745: 17.3009% ( 204) 00:08:28.824 14518.745 - 14619.569: 20.6401% ( 265) 00:08:28.824 14619.569 - 14720.394: 24.2440% ( 286) 00:08:28.824 14720.394 - 14821.218: 27.8352% ( 285) 00:08:28.824 14821.218 - 14922.043: 31.5776% ( 297) 00:08:28.824 14922.043 - 15022.868: 35.3957% ( 303) 00:08:28.824 15022.868 - 15123.692: 39.2767% ( 308) 00:08:28.824 15123.692 - 15224.517: 43.2334% ( 314) 00:08:28.824 15224.517 - 15325.342: 46.9506% ( 295) 00:08:28.824 15325.342 - 15426.166: 50.3780% ( 272) 00:08:28.824 15426.166 - 15526.991: 53.9819% ( 286) 00:08:28.824 15526.991 - 15627.815: 56.9934% ( 239) 00:08:28.824 15627.815 - 15728.640: 60.1815% ( 253) 00:08:28.824 15728.640 - 15829.465: 62.9914% ( 223) 00:08:28.824 15829.465 - 15930.289: 65.3730% ( 189) 00:08:28.824 15930.289 - 16031.114: 67.7545% ( 189) 00:08:28.824 16031.114 - 16131.938: 69.8841% ( 169) 00:08:28.824 16131.938 - 16232.763: 71.6986% ( 144) 00:08:28.824 16232.763 - 16333.588: 73.4879% ( 142) 00:08:28.824 16333.588 - 16434.412: 75.2520% ( 140) 00:08:28.824 16434.412 - 16535.237: 76.6129% ( 108) 00:08:28.824 16535.237 - 16636.062: 78.1502% ( 122) 00:08:28.824 16636.062 - 16736.886: 79.5111% ( 108) 00:08:28.824 16736.886 - 16837.711: 80.9728% ( 116) 00:08:28.824 16837.711 - 16938.535: 82.2329% ( 100) 00:08:28.824 16938.535 - 17039.360: 83.2409% ( 80) 00:08:28.824 17039.360 - 17140.185: 84.1104% ( 69) 00:08:28.824 17140.185 - 17241.009: 84.9798% ( 69) 00:08:28.824 17241.009 - 17341.834: 85.8241% ( 67) 00:08:28.824 17341.834 - 17442.658: 86.5927% ( 61) 00:08:28.824 17442.658 - 17543.483: 87.3866% ( 63) 00:08:28.824 17543.483 - 17644.308: 88.2182% ( 66) 00:08:28.824 17644.308 - 17745.132: 88.8861% ( 53) 00:08:28.824 17745.132 - 17845.957: 89.4279% ( 43) 00:08:28.824 17845.957 - 17946.782: 89.9698% ( 43) 00:08:28.824 17946.782 - 18047.606: 90.3730% ( 32) 00:08:28.824 18047.606 - 18148.431: 90.8140% ( 35) 00:08:28.824 18148.431 - 18249.255: 91.1164% ( 24) 00:08:28.824 18249.255 - 18350.080: 91.4189% ( 24) 00:08:28.824 18350.080 - 18450.905: 91.7717% ( 28) 00:08:28.824 18450.905 - 18551.729: 92.0363% ( 21) 
00:08:28.824 18551.729 - 18652.554: 92.3387% ( 24) 00:08:28.824 18652.554 - 18753.378: 92.6033% ( 21) 00:08:28.824 18753.378 - 18854.203: 92.7419% ( 11) 00:08:28.824 18854.203 - 18955.028: 92.8427% ( 8) 00:08:28.824 18955.028 - 19055.852: 92.9309% ( 7) 00:08:28.824 19055.852 - 19156.677: 93.0318% ( 8) 00:08:28.824 19156.677 - 19257.502: 93.1326% ( 8) 00:08:28.824 19257.502 - 19358.326: 93.2082% ( 6) 00:08:28.824 19358.326 - 19459.151: 93.3090% ( 8) 00:08:28.824 19459.151 - 19559.975: 93.4602% ( 12) 00:08:28.824 19559.975 - 19660.800: 93.6618% ( 16) 00:08:28.824 19660.800 - 19761.625: 93.8886% ( 18) 00:08:28.824 19761.625 - 19862.449: 94.1028% ( 17) 00:08:28.824 19862.449 - 19963.274: 94.3170% ( 17) 00:08:28.824 19963.274 - 20064.098: 94.5060% ( 15) 00:08:28.824 20064.098 - 20164.923: 94.7077% ( 16) 00:08:28.824 20164.923 - 20265.748: 94.8715% ( 13) 00:08:28.824 20265.748 - 20366.572: 95.0605% ( 15) 00:08:28.824 20366.572 - 20467.397: 95.2621% ( 16) 00:08:28.824 20467.397 - 20568.222: 95.5015% ( 19) 00:08:28.824 20568.222 - 20669.046: 95.7661% ( 21) 00:08:28.824 20669.046 - 20769.871: 96.0307% ( 21) 00:08:28.824 20769.871 - 20870.695: 96.2072% ( 14) 00:08:28.824 20870.695 - 20971.520: 96.2954% ( 7) 00:08:28.824 20971.520 - 21072.345: 96.3584% ( 5) 00:08:28.824 21072.345 - 21173.169: 96.4214% ( 5) 00:08:28.824 21173.169 - 21273.994: 96.4844% ( 5) 00:08:28.824 21273.994 - 21374.818: 96.5600% ( 6) 00:08:28.824 21374.818 - 21475.643: 96.6230% ( 5) 00:08:28.824 21475.643 - 21576.468: 96.6608% ( 3) 00:08:28.824 21576.468 - 21677.292: 96.7238% ( 5) 00:08:28.824 21677.292 - 21778.117: 96.7616% ( 3) 00:08:28.824 21778.117 - 21878.942: 96.7994% ( 3) 00:08:28.824 21878.942 - 21979.766: 96.8372% ( 3) 00:08:28.824 21979.766 - 22080.591: 96.8876% ( 4) 00:08:28.824 22080.591 - 22181.415: 96.9506% ( 5) 00:08:28.824 22181.415 - 22282.240: 97.0010% ( 4) 00:08:28.824 22282.240 - 22383.065: 97.0514% ( 4) 00:08:28.824 22383.065 - 22483.889: 97.0892% ( 3) 00:08:28.824 22483.889 - 22584.714: 97.1396% ( 4) 00:08:28.824 22584.714 - 22685.538: 97.2278% ( 7) 00:08:28.824 22685.538 - 22786.363: 97.3160% ( 7) 00:08:28.824 22786.363 - 22887.188: 97.4294% ( 9) 00:08:28.824 22887.188 - 22988.012: 97.5428% ( 9) 00:08:28.824 22988.012 - 23088.837: 97.6436% ( 8) 00:08:28.824 23088.837 - 23189.662: 97.7571% ( 9) 00:08:28.824 23189.662 - 23290.486: 97.8705% ( 9) 00:08:28.824 23290.486 - 23391.311: 97.9713% ( 8) 00:08:28.824 23391.311 - 23492.135: 98.0847% ( 9) 00:08:28.824 23492.135 - 23592.960: 98.1603% ( 6) 00:08:28.824 23592.960 - 23693.785: 98.2107% ( 4) 00:08:28.824 23693.785 - 23794.609: 98.2737% ( 5) 00:08:28.824 23794.609 - 23895.434: 98.3367% ( 5) 00:08:28.824 23895.434 - 23996.258: 98.3871% ( 4) 00:08:28.824 28835.840 - 29037.489: 98.4375% ( 4) 00:08:28.824 29037.489 - 29239.138: 98.5257% ( 7) 00:08:28.824 29239.138 - 29440.788: 98.6139% ( 7) 00:08:28.824 29440.788 - 29642.437: 98.6895% ( 6) 00:08:28.824 29642.437 - 29844.086: 98.7903% ( 8) 00:08:28.824 29844.086 - 30045.735: 98.8659% ( 6) 00:08:28.824 30045.735 - 30247.385: 98.9667% ( 8) 00:08:28.824 30247.385 - 30449.034: 99.0549% ( 7) 00:08:28.824 30449.034 - 30650.683: 99.1557% ( 8) 00:08:28.824 30650.683 - 30852.332: 99.1935% ( 3) 00:08:28.824 41136.443 - 41338.092: 99.2566% ( 5) 00:08:28.824 41338.092 - 41539.742: 99.3574% ( 8) 00:08:28.824 41539.742 - 41741.391: 99.4456% ( 7) 00:08:28.824 41741.391 - 41943.040: 99.5464% ( 8) 00:08:28.824 41943.040 - 42144.689: 99.6598% ( 9) 00:08:28.824 42144.689 - 42346.338: 99.7606% ( 8) 00:08:28.824 42346.338 - 42547.988: 
99.8740% ( 9) 00:08:28.824 42547.988 - 42749.637: 99.9874% ( 9) 00:08:28.825 42749.637 - 42951.286: 100.0000% ( 1) 00:08:28.825 00:08:28.825 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:28.825 ============================================================================== 00:08:28.825 Range in us Cumulative IO count 00:08:28.825 5293.292 - 5318.498: 0.0126% ( 1) 00:08:28.825 5318.498 - 5343.705: 0.0378% ( 2) 00:08:28.825 5343.705 - 5368.911: 0.0504% ( 1) 00:08:28.825 5368.911 - 5394.117: 0.0882% ( 3) 00:08:28.825 5394.117 - 5419.323: 0.1134% ( 2) 00:08:28.825 5419.323 - 5444.529: 0.1260% ( 1) 00:08:28.825 5444.529 - 5469.735: 0.1512% ( 2) 00:08:28.825 5469.735 - 5494.942: 0.1764% ( 2) 00:08:28.825 5494.942 - 5520.148: 0.1890% ( 1) 00:08:28.825 5520.148 - 5545.354: 0.2142% ( 2) 00:08:28.825 5545.354 - 5570.560: 0.2268% ( 1) 00:08:28.825 5570.560 - 5595.766: 0.2520% ( 2) 00:08:28.825 5595.766 - 5620.972: 0.2772% ( 2) 00:08:28.825 5620.972 - 5646.178: 0.2898% ( 1) 00:08:28.825 5646.178 - 5671.385: 0.3150% ( 2) 00:08:28.825 5671.385 - 5696.591: 0.3402% ( 2) 00:08:28.825 5696.591 - 5721.797: 0.3528% ( 1) 00:08:28.825 5721.797 - 5747.003: 0.3780% ( 2) 00:08:28.825 5747.003 - 5772.209: 0.4032% ( 2) 00:08:28.825 5772.209 - 5797.415: 0.4284% ( 2) 00:08:28.825 5797.415 - 5822.622: 0.4410% ( 1) 00:08:28.825 5822.622 - 5847.828: 0.4662% ( 2) 00:08:28.825 5847.828 - 5873.034: 0.4914% ( 2) 00:08:28.825 5873.034 - 5898.240: 0.5040% ( 1) 00:08:28.825 5898.240 - 5923.446: 0.5292% ( 2) 00:08:28.825 5923.446 - 5948.652: 0.5544% ( 2) 00:08:28.825 5948.652 - 5973.858: 0.5796% ( 2) 00:08:28.825 5973.858 - 5999.065: 0.5922% ( 1) 00:08:28.825 5999.065 - 6024.271: 0.6174% ( 2) 00:08:28.825 6024.271 - 6049.477: 0.6426% ( 2) 00:08:28.825 6049.477 - 6074.683: 0.6552% ( 1) 00:08:28.825 6074.683 - 6099.889: 0.6804% ( 2) 00:08:28.825 6099.889 - 6125.095: 0.7056% ( 2) 00:08:28.825 6125.095 - 6150.302: 0.7182% ( 1) 00:08:28.825 6150.302 - 6175.508: 0.7434% ( 2) 00:08:28.825 6175.508 - 6200.714: 0.7560% ( 1) 00:08:28.825 6200.714 - 6225.920: 0.7812% ( 2) 00:08:28.825 6225.920 - 6251.126: 0.7939% ( 1) 00:08:28.825 6251.126 - 6276.332: 0.8065% ( 1) 00:08:28.825 12804.726 - 12855.138: 0.8695% ( 5) 00:08:28.825 12855.138 - 12905.551: 0.9577% ( 7) 00:08:28.825 12905.551 - 13006.375: 1.1341% ( 14) 00:08:28.825 13006.375 - 13107.200: 1.2475% ( 9) 00:08:28.825 13107.200 - 13208.025: 1.4743% ( 18) 00:08:28.825 13208.025 - 13308.849: 1.6633% ( 15) 00:08:28.825 13308.849 - 13409.674: 1.9027% ( 19) 00:08:28.825 13409.674 - 13510.498: 2.2303% ( 26) 00:08:28.825 13510.498 - 13611.323: 2.6840% ( 36) 00:08:28.825 13611.323 - 13712.148: 3.4526% ( 61) 00:08:28.825 13712.148 - 13812.972: 4.3347% ( 70) 00:08:28.825 13812.972 - 13913.797: 5.5948% ( 100) 00:08:28.825 13913.797 - 14014.622: 7.1447% ( 123) 00:08:28.825 14014.622 - 14115.446: 8.8458% ( 135) 00:08:28.825 14115.446 - 14216.271: 10.6855% ( 146) 00:08:28.825 14216.271 - 14317.095: 12.7268% ( 162) 00:08:28.825 14317.095 - 14417.920: 15.3982% ( 212) 00:08:28.825 14417.920 - 14518.745: 18.5862% ( 253) 00:08:28.825 14518.745 - 14619.569: 21.6860% ( 246) 00:08:28.825 14619.569 - 14720.394: 25.0000% ( 263) 00:08:28.825 14720.394 - 14821.218: 28.8936% ( 309) 00:08:28.825 14821.218 - 14922.043: 33.1149% ( 335) 00:08:28.825 14922.043 - 15022.868: 36.9456% ( 304) 00:08:28.825 15022.868 - 15123.692: 40.4738% ( 280) 00:08:28.825 15123.692 - 15224.517: 43.9894% ( 279) 00:08:28.825 15224.517 - 15325.342: 47.3916% ( 270) 00:08:28.825 15325.342 - 15426.166: 50.8695% ( 276) 
00:08:28.825 15426.166 - 15526.991: 54.1709% ( 262) 00:08:28.825 15526.991 - 15627.815: 57.2707% ( 246) 00:08:28.825 15627.815 - 15728.640: 60.2445% ( 236) 00:08:28.825 15728.640 - 15829.465: 62.7646% ( 200) 00:08:28.825 15829.465 - 15930.289: 65.3226% ( 203) 00:08:28.825 15930.289 - 16031.114: 67.7923% ( 196) 00:08:28.825 16031.114 - 16131.938: 69.7077% ( 152) 00:08:28.825 16131.938 - 16232.763: 71.2954% ( 126) 00:08:28.825 16232.763 - 16333.588: 72.9335% ( 130) 00:08:28.825 16333.588 - 16434.412: 74.6346% ( 135) 00:08:28.825 16434.412 - 16535.237: 76.1971% ( 124) 00:08:28.825 16535.237 - 16636.062: 77.6714% ( 117) 00:08:28.825 16636.062 - 16736.886: 78.9945% ( 105) 00:08:28.825 16736.886 - 16837.711: 80.2041% ( 96) 00:08:28.825 16837.711 - 16938.535: 81.4138% ( 96) 00:08:28.825 16938.535 - 17039.360: 82.5983% ( 94) 00:08:28.825 17039.360 - 17140.185: 83.7324% ( 90) 00:08:28.825 17140.185 - 17241.009: 84.7782% ( 83) 00:08:28.825 17241.009 - 17341.834: 85.8367% ( 84) 00:08:28.825 17341.834 - 17442.658: 86.8574% ( 81) 00:08:28.825 17442.658 - 17543.483: 87.8024% ( 75) 00:08:28.825 17543.483 - 17644.308: 88.6215% ( 65) 00:08:28.825 17644.308 - 17745.132: 89.3019% ( 54) 00:08:28.825 17745.132 - 17845.957: 89.8816% ( 46) 00:08:28.825 17845.957 - 17946.782: 90.3100% ( 34) 00:08:28.825 17946.782 - 18047.606: 90.7510% ( 35) 00:08:28.825 18047.606 - 18148.431: 91.1038% ( 28) 00:08:28.825 18148.431 - 18249.255: 91.3432% ( 19) 00:08:28.825 18249.255 - 18350.080: 91.5827% ( 19) 00:08:28.825 18350.080 - 18450.905: 91.7213% ( 11) 00:08:28.825 18450.905 - 18551.729: 91.8095% ( 7) 00:08:28.825 18551.729 - 18652.554: 91.9481% ( 11) 00:08:28.825 18652.554 - 18753.378: 92.1245% ( 14) 00:08:28.825 18753.378 - 18854.203: 92.2505% ( 10) 00:08:28.825 18854.203 - 18955.028: 92.3765% ( 10) 00:08:28.825 18955.028 - 19055.852: 92.5025% ( 10) 00:08:28.825 19055.852 - 19156.677: 92.6285% ( 10) 00:08:28.825 19156.677 - 19257.502: 92.7671% ( 11) 00:08:28.825 19257.502 - 19358.326: 92.9561% ( 15) 00:08:28.825 19358.326 - 19459.151: 93.1578% ( 16) 00:08:28.825 19459.151 - 19559.975: 93.4476% ( 23) 00:08:28.825 19559.975 - 19660.800: 93.8004% ( 28) 00:08:28.825 19660.800 - 19761.625: 94.1280% ( 26) 00:08:28.825 19761.625 - 19862.449: 94.4052% ( 22) 00:08:28.825 19862.449 - 19963.274: 94.6321% ( 18) 00:08:28.825 19963.274 - 20064.098: 94.8463% ( 17) 00:08:28.825 20064.098 - 20164.923: 95.0227% ( 14) 00:08:28.825 20164.923 - 20265.748: 95.1991% ( 14) 00:08:28.825 20265.748 - 20366.572: 95.3881% ( 15) 00:08:28.825 20366.572 - 20467.397: 95.5897% ( 16) 00:08:28.825 20467.397 - 20568.222: 95.7787% ( 15) 00:08:28.825 20568.222 - 20669.046: 95.9299% ( 12) 00:08:28.825 20669.046 - 20769.871: 96.0938% ( 13) 00:08:28.825 20769.871 - 20870.695: 96.2324% ( 11) 00:08:28.825 20870.695 - 20971.520: 96.3080% ( 6) 00:08:28.825 20971.520 - 21072.345: 96.3710% ( 5) 00:08:28.825 21072.345 - 21173.169: 96.4214% ( 4) 00:08:28.825 21173.169 - 21273.994: 96.4844% ( 5) 00:08:28.825 21273.994 - 21374.818: 96.5474% ( 5) 00:08:28.825 21374.818 - 21475.643: 96.6104% ( 5) 00:08:28.825 21475.643 - 21576.468: 96.6734% ( 5) 00:08:28.825 21576.468 - 21677.292: 96.7238% ( 4) 00:08:28.825 21677.292 - 21778.117: 96.7868% ( 5) 00:08:28.825 21778.117 - 21878.942: 96.8246% ( 3) 00:08:28.825 21878.942 - 21979.766: 96.8750% ( 4) 00:08:28.825 21979.766 - 22080.591: 96.9254% ( 4) 00:08:28.825 22080.591 - 22181.415: 96.9758% ( 4) 00:08:28.825 22181.415 - 22282.240: 97.0262% ( 4) 00:08:28.825 22282.240 - 22383.065: 97.0766% ( 4) 00:08:28.825 22383.065 - 22483.889: 
97.1270% ( 4)
00:08:28.825 22483.889 - 22584.714: 97.1774% ( 4)
00:08:28.825 22584.714 - 22685.538: 97.2656% ( 7)
00:08:28.825 22685.538 - 22786.363: 97.3538% ( 7)
00:08:28.825 22786.363 - 22887.188: 97.4546% ( 8)
00:08:28.825 22887.188 - 22988.012: 97.5680% ( 9)
00:08:28.825 22988.012 - 23088.837: 97.6689% ( 8)
00:08:28.825 23088.837 - 23189.662: 97.7823% ( 9)
00:08:28.825 23189.662 - 23290.486: 97.8831% ( 8)
00:08:28.825 23290.486 - 23391.311: 97.9965% ( 9)
00:08:28.825 23391.311 - 23492.135: 98.0721% ( 6)
00:08:28.825 23492.135 - 23592.960: 98.1225% ( 4)
00:08:28.825 23592.960 - 23693.785: 98.1855% ( 5)
00:08:28.825 23693.785 - 23794.609: 98.2485% ( 5)
00:08:28.825 23794.609 - 23895.434: 98.3115% ( 5)
00:08:28.825 23895.434 - 23996.258: 98.3619% ( 4)
00:08:28.825 23996.258 - 24097.083: 98.3871% ( 2)
00:08:28.825 28432.542 - 28634.191: 98.4753% ( 7)
00:08:28.825 28634.191 - 28835.840: 98.5635% ( 7)
00:08:28.825 28835.840 - 29037.489: 98.6643% ( 8)
00:08:28.825 29037.489 - 29239.138: 98.7525% ( 7)
00:08:28.825 29239.138 - 29440.788: 98.8407% ( 7)
00:08:28.825 29440.788 - 29642.437: 98.9289% ( 7)
00:08:28.825 29642.437 - 29844.086: 99.0171% ( 7)
00:08:28.825 29844.086 - 30045.735: 99.1179% ( 8)
00:08:28.825 30045.735 - 30247.385: 99.1935% ( 6)
00:08:28.825 41136.443 - 41338.092: 99.2692% ( 6)
00:08:28.825 41338.092 - 41539.742: 99.3574% ( 7)
00:08:28.825 41539.742 - 41741.391: 99.4582% ( 8)
00:08:28.825 41741.391 - 41943.040: 99.5716% ( 9)
00:08:28.825 41943.040 - 42144.689: 99.6850% ( 9)
00:08:28.825 42144.689 - 42346.338: 99.7858% ( 8)
00:08:28.825 42346.338 - 42547.988: 99.8992% ( 9)
00:08:28.825 42547.988 - 42749.637: 100.0000% ( 8)
00:08:28.825
00:08:28.825 00:31:05 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:29.769 Initializing NVMe Controllers
00:08:29.769 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:29.769 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:29.769 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:29.769 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:29.769 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:29.769 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:29.769 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:29.769 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:29.769 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:29.769 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:29.769 Initialization complete. Launching workers.
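The tables that follow are reductions of cumulative-count histograms: each histogram row pairs a latency bucket ("Range in us") with the running total of I/Os completed at or below its upper edge ("Cumulative IO count"), and each "Summary latency data" percentile is the upper edge of the first bucket whose running total reaches that fraction of the run's I/Os. A minimal sketch of that reduction in plain C follows; the bucket values are illustrative rather than taken from this run, and percentile_us is a hypothetical helper, not an SPDK API:

/* percentile_from_hist.c - sketch of deriving percentile summaries from
 * cumulative-count latency buckets, in the shape spdk_nvme_perf prints. */
#include <stdio.h>

struct bucket {
    double end_us;          /* upper edge of the latency bucket, in us */
    unsigned long cum_ios;  /* cumulative I/O count up to this bucket */
};

/* Upper edge of the first bucket whose cumulative count reaches pct%. */
static double percentile_us(const struct bucket *b, int n, double pct)
{
    double target = (double)b[n - 1].cum_ios * pct / 100.0;

    for (int i = 0; i < n; i++) {
        if ((double)b[i].cum_ios >= target)
            return b[i].end_us;
    }
    return b[n - 1].end_us;
}

int main(void)
{
    /* Illustrative rows only; a real run has hundreds of buckets. */
    const struct bucket hist[] = {
        { 14317.095, 100 }, { 15426.166, 500 },
        { 19761.625, 950 }, { 39724.898, 1000 },
    };
    int n = (int)(sizeof(hist) / sizeof(hist[0]));

    printf("50.00000%% : %.3fus\n", percentile_us(hist, n, 50.0));
    printf("95.00000%% : %.3fus\n", percentile_us(hist, n, 95.0));
    return 0;
}

Read the 0000:00:13.0 summary below the same way: the median write lands in the 15426.166us bucket, while the 99th-percentile tail jumps out past 32 ms.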
00:08:29.769 ========================================================
00:08:29.769 Latency(us)
00:08:29.769 Device Information : IOPS MiB/s Average min max
00:08:29.769 PCIE (0000:00:13.0) NSID 1 from core 0: 7998.96 93.74 16023.93 10693.62 39527.85
00:08:29.769 PCIE (0000:00:10.0) NSID 1 from core 0: 7998.96 93.74 16008.65 10005.53 39284.18
00:08:29.769 PCIE (0000:00:11.0) NSID 1 from core 0: 7998.96 93.74 15988.72 9435.94 38348.26
00:08:29.769 PCIE (0000:00:12.0) NSID 1 from core 0: 7998.96 93.74 15969.62 7822.98 38827.62
00:08:29.769 PCIE (0000:00:12.0) NSID 2 from core 0: 7998.96 93.74 15950.58 7392.04 38167.71
00:08:29.769 PCIE (0000:00:12.0) NSID 3 from core 0: 8062.95 94.49 15804.94 6705.22 30205.31
00:08:29.769 ========================================================
00:08:29.769 Total : 48057.75 563.18 15957.54 6705.22 39527.85
00:08:29.769
00:08:29.769 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:29.769 =================================================================================
00:08:29.769 1.00000% : 13107.200us
00:08:29.769 10.00000% : 14317.095us
00:08:29.769 25.00000% : 14821.218us
00:08:29.769 50.00000% : 15426.166us
00:08:29.769 75.00000% : 16232.763us
00:08:29.769 90.00000% : 18753.378us
00:08:29.769 95.00000% : 19761.625us
00:08:29.769 98.00000% : 20568.222us
00:08:29.769 99.00000% : 32868.825us
00:08:29.769 99.50000% : 38716.652us
00:08:29.769 99.90000% : 39523.249us
00:08:29.769 99.99000% : 39724.898us
00:08:29.769 99.99900% : 39724.898us
00:08:29.769 99.99990% : 39724.898us
00:08:29.769 99.99999% : 39724.898us
00:08:29.769
00:08:29.769 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:29.769 =================================================================================
00:08:29.769 1.00000% : 13107.200us
00:08:29.769 10.00000% : 14216.271us
00:08:29.769 25.00000% : 14720.394us
00:08:29.769 50.00000% : 15426.166us
00:08:29.769 75.00000% : 16434.412us
00:08:29.769 90.00000% : 18450.905us
00:08:29.769 95.00000% : 19660.800us
00:08:29.769 98.00000% : 20769.871us
00:08:29.769 99.00000% : 32263.877us
00:08:29.769 99.50000% : 38313.354us
00:08:29.769 99.90000% : 39119.951us
00:08:29.769 99.99000% : 39321.600us
00:08:29.769 99.99900% : 39321.600us
00:08:29.769 99.99990% : 39321.600us
00:08:29.769 99.99999% : 39321.600us
00:08:29.769
00:08:29.769 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:29.769 =================================================================================
00:08:29.769 1.00000% : 13409.674us
00:08:29.769 10.00000% : 14317.095us
00:08:29.769 25.00000% : 14821.218us
00:08:29.769 50.00000% : 15426.166us
00:08:29.769 75.00000% : 16333.588us
00:08:29.769 90.00000% : 18450.905us
00:08:29.769 95.00000% : 19459.151us
00:08:29.769 98.00000% : 20769.871us
00:08:29.769 99.00000% : 31457.280us
00:08:29.769 99.50000% : 37506.757us
00:08:29.769 99.90000% : 38313.354us
00:08:29.769 99.99000% : 38515.003us
00:08:29.769 99.99900% : 38515.003us
00:08:29.769 99.99990% : 38515.003us
00:08:29.769 99.99999% : 38515.003us
00:08:29.769
00:08:29.769 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:29.769 =================================================================================
00:08:29.769 1.00000% : 12905.551us
00:08:29.769 10.00000% : 14417.920us
00:08:29.769 25.00000% : 14821.218us
00:08:29.769 50.00000% : 15426.166us
00:08:29.769 75.00000% : 16333.588us
00:08:29.769 90.00000% : 18450.905us
00:08:29.769 95.00000% : 19257.502us
00:08:29.769 98.00000% : 21072.345us
00:08:29.769 99.00000% : 31255.631us 00:08:29.769 99.50000% : 38111.705us 00:08:29.769 99.90000% : 38716.652us 00:08:29.769 99.99000% : 38918.302us 00:08:29.769 99.99900% : 38918.302us 00:08:29.769 99.99990% : 38918.302us 00:08:29.769 99.99999% : 38918.302us 00:08:29.769 00:08:29.769 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:29.769 ================================================================================= 00:08:29.769 1.00000% : 11897.305us 00:08:29.769 10.00000% : 14417.920us 00:08:29.769 25.00000% : 14922.043us 00:08:29.769 50.00000% : 15426.166us 00:08:29.769 75.00000% : 16333.588us 00:08:29.769 90.00000% : 18350.080us 00:08:29.769 95.00000% : 19559.975us 00:08:29.769 98.00000% : 20870.695us 00:08:29.769 99.00000% : 30247.385us 00:08:29.769 99.50000% : 37305.108us 00:08:29.769 99.90000% : 38111.705us 00:08:29.769 99.99000% : 38313.354us 00:08:29.769 99.99900% : 38313.354us 00:08:29.769 99.99990% : 38313.354us 00:08:29.769 99.99999% : 38313.354us 00:08:29.769 00:08:29.769 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:29.769 ================================================================================= 00:08:29.769 1.00000% : 11443.594us 00:08:29.769 10.00000% : 14317.095us 00:08:29.769 25.00000% : 14821.218us 00:08:29.769 50.00000% : 15426.166us 00:08:29.769 75.00000% : 16333.588us 00:08:29.769 90.00000% : 18652.554us 00:08:29.769 95.00000% : 19660.800us 00:08:29.769 98.00000% : 20870.695us 00:08:29.769 99.00000% : 23492.135us 00:08:29.769 99.50000% : 29440.788us 00:08:29.769 99.90000% : 30045.735us 00:08:29.769 99.99000% : 30247.385us 00:08:29.769 99.99900% : 30247.385us 00:08:29.769 99.99990% : 30247.385us 00:08:29.769 99.99999% : 30247.385us 00:08:29.769 00:08:29.769 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:29.769 ============================================================================== 00:08:29.769 Range in us Cumulative IO count 00:08:29.769 10687.409 - 10737.822: 0.0500% ( 4) 00:08:29.769 10737.822 - 10788.234: 0.0750% ( 2) 00:08:29.769 10788.234 - 10838.646: 0.1125% ( 3) 00:08:29.769 10838.646 - 10889.058: 0.2000% ( 7) 00:08:29.769 10889.058 - 10939.471: 0.2875% ( 7) 00:08:29.769 10939.471 - 10989.883: 0.4875% ( 16) 00:08:29.769 10989.883 - 11040.295: 0.5625% ( 6) 00:08:29.769 11040.295 - 11090.708: 0.5875% ( 2) 00:08:29.769 11090.708 - 11141.120: 0.6375% ( 4) 00:08:29.769 11141.120 - 11191.532: 0.6750% ( 3) 00:08:29.770 11191.532 - 11241.945: 0.7125% ( 3) 00:08:29.770 11241.945 - 11292.357: 0.7500% ( 3) 00:08:29.770 11292.357 - 11342.769: 0.7875% ( 3) 00:08:29.770 11342.769 - 11393.182: 0.8000% ( 1) 00:08:29.770 12855.138 - 12905.551: 0.8375% ( 3) 00:08:29.770 12905.551 - 13006.375: 0.9250% ( 7) 00:08:29.770 13006.375 - 13107.200: 1.0625% ( 11) 00:08:29.770 13107.200 - 13208.025: 1.3750% ( 25) 00:08:29.770 13208.025 - 13308.849: 1.5000% ( 10) 00:08:29.770 13308.849 - 13409.674: 1.5500% ( 4) 00:08:29.770 13409.674 - 13510.498: 1.6500% ( 8) 00:08:29.770 13510.498 - 13611.323: 1.8000% ( 12) 00:08:29.770 13611.323 - 13712.148: 2.2500% ( 36) 00:08:29.770 13712.148 - 13812.972: 2.9750% ( 58) 00:08:29.770 13812.972 - 13913.797: 3.9375% ( 77) 00:08:29.770 13913.797 - 14014.622: 5.6625% ( 138) 00:08:29.770 14014.622 - 14115.446: 8.0750% ( 193) 00:08:29.770 14115.446 - 14216.271: 9.9875% ( 153) 00:08:29.770 14216.271 - 14317.095: 12.4625% ( 198) 00:08:29.770 14317.095 - 14417.920: 15.3000% ( 227) 00:08:29.770 14417.920 - 14518.745: 18.3750% ( 246) 00:08:29.770 14518.745 - 14619.569: 21.4500% ( 
246) 00:08:29.770 14619.569 - 14720.394: 24.9125% ( 277) 00:08:29.770 14720.394 - 14821.218: 28.1375% ( 258) 00:08:29.770 14821.218 - 14922.043: 31.4750% ( 267) 00:08:29.770 14922.043 - 15022.868: 35.1375% ( 293) 00:08:29.770 15022.868 - 15123.692: 38.9125% ( 302) 00:08:29.770 15123.692 - 15224.517: 43.2375% ( 346) 00:08:29.770 15224.517 - 15325.342: 47.6750% ( 355) 00:08:29.770 15325.342 - 15426.166: 51.9125% ( 339) 00:08:29.770 15426.166 - 15526.991: 55.3125% ( 272) 00:08:29.770 15526.991 - 15627.815: 58.3125% ( 240) 00:08:29.770 15627.815 - 15728.640: 61.2125% ( 232) 00:08:29.770 15728.640 - 15829.465: 64.0000% ( 223) 00:08:29.770 15829.465 - 15930.289: 67.1625% ( 253) 00:08:29.770 15930.289 - 16031.114: 70.2375% ( 246) 00:08:29.770 16031.114 - 16131.938: 73.2500% ( 241) 00:08:29.770 16131.938 - 16232.763: 75.7125% ( 197) 00:08:29.770 16232.763 - 16333.588: 77.5750% ( 149) 00:08:29.770 16333.588 - 16434.412: 79.1250% ( 124) 00:08:29.770 16434.412 - 16535.237: 80.1125% ( 79) 00:08:29.770 16535.237 - 16636.062: 80.9750% ( 69) 00:08:29.770 16636.062 - 16736.886: 82.1875% ( 97) 00:08:29.770 16736.886 - 16837.711: 82.8000% ( 49) 00:08:29.770 16837.711 - 16938.535: 83.5000% ( 56) 00:08:29.770 16938.535 - 17039.360: 84.1375% ( 51) 00:08:29.770 17039.360 - 17140.185: 84.5875% ( 36) 00:08:29.770 17140.185 - 17241.009: 85.1625% ( 46) 00:08:29.770 17241.009 - 17341.834: 85.4250% ( 21) 00:08:29.770 17341.834 - 17442.658: 85.8000% ( 30) 00:08:29.770 17442.658 - 17543.483: 86.2875% ( 39) 00:08:29.770 17543.483 - 17644.308: 87.0250% ( 59) 00:08:29.770 17644.308 - 17745.132: 87.4000% ( 30) 00:08:29.770 17745.132 - 17845.957: 87.7000% ( 24) 00:08:29.770 17845.957 - 17946.782: 87.8625% ( 13) 00:08:29.770 17946.782 - 18047.606: 88.0000% ( 11) 00:08:29.770 18047.606 - 18148.431: 88.2250% ( 18) 00:08:29.770 18148.431 - 18249.255: 88.4375% ( 17) 00:08:29.770 18249.255 - 18350.080: 88.6375% ( 16) 00:08:29.770 18350.080 - 18450.905: 88.9375% ( 24) 00:08:29.770 18450.905 - 18551.729: 89.2750% ( 27) 00:08:29.770 18551.729 - 18652.554: 89.6375% ( 29) 00:08:29.770 18652.554 - 18753.378: 90.0500% ( 33) 00:08:29.770 18753.378 - 18854.203: 90.5250% ( 38) 00:08:29.770 18854.203 - 18955.028: 91.0500% ( 42) 00:08:29.770 18955.028 - 19055.852: 91.8625% ( 65) 00:08:29.770 19055.852 - 19156.677: 92.3125% ( 36) 00:08:29.770 19156.677 - 19257.502: 92.9875% ( 54) 00:08:29.770 19257.502 - 19358.326: 93.4375% ( 36) 00:08:29.770 19358.326 - 19459.151: 93.8875% ( 36) 00:08:29.770 19459.151 - 19559.975: 94.2250% ( 27) 00:08:29.770 19559.975 - 19660.800: 94.6000% ( 30) 00:08:29.770 19660.800 - 19761.625: 95.1625% ( 45) 00:08:29.770 19761.625 - 19862.449: 95.5500% ( 31) 00:08:29.770 19862.449 - 19963.274: 96.0500% ( 40) 00:08:29.770 19963.274 - 20064.098: 96.3625% ( 25) 00:08:29.770 20064.098 - 20164.923: 96.6875% ( 26) 00:08:29.770 20164.923 - 20265.748: 97.1000% ( 33) 00:08:29.770 20265.748 - 20366.572: 97.3625% ( 21) 00:08:29.770 20366.572 - 20467.397: 97.6375% ( 22) 00:08:29.770 20467.397 - 20568.222: 98.0000% ( 29) 00:08:29.770 20568.222 - 20669.046: 98.2125% ( 17) 00:08:29.770 20669.046 - 20769.871: 98.3625% ( 12) 00:08:29.770 20769.871 - 20870.695: 98.4000% ( 3) 00:08:29.770 31860.578 - 32062.228: 98.5000% ( 8) 00:08:29.770 32062.228 - 32263.877: 98.6375% ( 11) 00:08:29.770 32263.877 - 32465.526: 98.7875% ( 12) 00:08:29.770 32465.526 - 32667.175: 98.8875% ( 8) 00:08:29.770 32667.175 - 32868.825: 99.0000% ( 9) 00:08:29.770 32868.825 - 33070.474: 99.1000% ( 8) 00:08:29.770 33070.474 - 33272.123: 99.2000% ( 8) 00:08:29.770 
37305.108 - 37506.757: 99.2500% ( 4) 00:08:29.770 38111.705 - 38313.354: 99.3125% ( 5) 00:08:29.770 38313.354 - 38515.003: 99.4250% ( 9) 00:08:29.770 38515.003 - 38716.652: 99.5250% ( 8) 00:08:29.770 38716.652 - 38918.302: 99.6500% ( 10) 00:08:29.770 38918.302 - 39119.951: 99.7625% ( 9) 00:08:29.770 39119.951 - 39321.600: 99.8875% ( 10) 00:08:29.770 39321.600 - 39523.249: 99.9875% ( 8) 00:08:29.770 39523.249 - 39724.898: 100.0000% ( 1) 00:08:29.770 00:08:29.770 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:29.770 ============================================================================== 00:08:29.770 Range in us Cumulative IO count 00:08:29.770 9981.637 - 10032.049: 0.0125% ( 1) 00:08:29.770 10032.049 - 10082.462: 0.0375% ( 2) 00:08:29.770 10082.462 - 10132.874: 0.1750% ( 11) 00:08:29.770 10132.874 - 10183.286: 0.3625% ( 15) 00:08:29.770 10183.286 - 10233.698: 0.4750% ( 9) 00:08:29.770 10233.698 - 10284.111: 0.5250% ( 4) 00:08:29.770 10284.111 - 10334.523: 0.5375% ( 1) 00:08:29.770 10334.523 - 10384.935: 0.5500% ( 1) 00:08:29.770 10384.935 - 10435.348: 0.5625% ( 1) 00:08:29.770 10435.348 - 10485.760: 0.5875% ( 2) 00:08:29.770 10485.760 - 10536.172: 0.6000% ( 1) 00:08:29.770 10536.172 - 10586.585: 0.6375% ( 3) 00:08:29.770 10586.585 - 10636.997: 0.6500% ( 1) 00:08:29.770 10687.409 - 10737.822: 0.6875% ( 3) 00:08:29.770 10737.822 - 10788.234: 0.7000% ( 1) 00:08:29.770 10788.234 - 10838.646: 0.7250% ( 2) 00:08:29.770 10838.646 - 10889.058: 0.7375% ( 1) 00:08:29.770 10889.058 - 10939.471: 0.7875% ( 4) 00:08:29.770 10939.471 - 10989.883: 0.8000% ( 1) 00:08:29.770 12905.551 - 13006.375: 0.8500% ( 4) 00:08:29.770 13006.375 - 13107.200: 1.0625% ( 17) 00:08:29.770 13107.200 - 13208.025: 1.2000% ( 11) 00:08:29.770 13208.025 - 13308.849: 1.2625% ( 5) 00:08:29.770 13308.849 - 13409.674: 1.4500% ( 15) 00:08:29.770 13409.674 - 13510.498: 1.6875% ( 19) 00:08:29.770 13510.498 - 13611.323: 1.9750% ( 23) 00:08:29.770 13611.323 - 13712.148: 2.3875% ( 33) 00:08:29.770 13712.148 - 13812.972: 3.3750% ( 79) 00:08:29.770 13812.972 - 13913.797: 4.4875% ( 89) 00:08:29.770 13913.797 - 14014.622: 5.9375% ( 116) 00:08:29.770 14014.622 - 14115.446: 7.7750% ( 147) 00:08:29.770 14115.446 - 14216.271: 10.2625% ( 199) 00:08:29.770 14216.271 - 14317.095: 13.4500% ( 255) 00:08:29.770 14317.095 - 14417.920: 16.6500% ( 256) 00:08:29.770 14417.920 - 14518.745: 19.6500% ( 240) 00:08:29.770 14518.745 - 14619.569: 22.2250% ( 206) 00:08:29.770 14619.569 - 14720.394: 25.4750% ( 260) 00:08:29.770 14720.394 - 14821.218: 29.4375% ( 317) 00:08:29.770 14821.218 - 14922.043: 33.7500% ( 345) 00:08:29.770 14922.043 - 15022.868: 37.7625% ( 321) 00:08:29.770 15022.868 - 15123.692: 41.6750% ( 313) 00:08:29.770 15123.692 - 15224.517: 45.3000% ( 290) 00:08:29.770 15224.517 - 15325.342: 48.3375% ( 243) 00:08:29.770 15325.342 - 15426.166: 51.1375% ( 224) 00:08:29.770 15426.166 - 15526.991: 54.3250% ( 255) 00:08:29.770 15526.991 - 15627.815: 57.8250% ( 280) 00:08:29.770 15627.815 - 15728.640: 60.8625% ( 243) 00:08:29.770 15728.640 - 15829.465: 63.4500% ( 207) 00:08:29.770 15829.465 - 15930.289: 66.2500% ( 224) 00:08:29.770 15930.289 - 16031.114: 68.9250% ( 214) 00:08:29.770 16031.114 - 16131.938: 70.9875% ( 165) 00:08:29.770 16131.938 - 16232.763: 72.8375% ( 148) 00:08:29.770 16232.763 - 16333.588: 74.6500% ( 145) 00:08:29.770 16333.588 - 16434.412: 76.3500% ( 136) 00:08:29.770 16434.412 - 16535.237: 77.6750% ( 106) 00:08:29.770 16535.237 - 16636.062: 78.8250% ( 92) 00:08:29.770 16636.062 - 16736.886: 80.1000% ( 102) 
00:08:29.770 16736.886 - 16837.711: 81.2125% ( 89) 00:08:29.770 16837.711 - 16938.535: 82.2375% ( 82) 00:08:29.770 16938.535 - 17039.360: 83.0250% ( 63) 00:08:29.770 17039.360 - 17140.185: 83.7250% ( 56) 00:08:29.770 17140.185 - 17241.009: 84.3750% ( 52) 00:08:29.770 17241.009 - 17341.834: 84.9750% ( 48) 00:08:29.770 17341.834 - 17442.658: 85.5250% ( 44) 00:08:29.770 17442.658 - 17543.483: 86.0250% ( 40) 00:08:29.770 17543.483 - 17644.308: 86.6375% ( 49) 00:08:29.770 17644.308 - 17745.132: 86.9625% ( 26) 00:08:29.770 17745.132 - 17845.957: 87.2375% ( 22) 00:08:29.770 17845.957 - 17946.782: 87.6875% ( 36) 00:08:29.770 17946.782 - 18047.606: 88.1250% ( 35) 00:08:29.770 18047.606 - 18148.431: 88.5875% ( 37) 00:08:29.770 18148.431 - 18249.255: 89.0000% ( 33) 00:08:29.770 18249.255 - 18350.080: 89.5000% ( 40) 00:08:29.770 18350.080 - 18450.905: 90.0125% ( 41) 00:08:29.770 18450.905 - 18551.729: 90.4375% ( 34) 00:08:29.770 18551.729 - 18652.554: 91.0125% ( 46) 00:08:29.770 18652.554 - 18753.378: 91.4875% ( 38) 00:08:29.770 18753.378 - 18854.203: 91.9375% ( 36) 00:08:29.770 18854.203 - 18955.028: 92.3750% ( 35) 00:08:29.770 18955.028 - 19055.852: 92.7125% ( 27) 00:08:29.771 19055.852 - 19156.677: 93.2625% ( 44) 00:08:29.771 19156.677 - 19257.502: 93.7250% ( 37) 00:08:29.771 19257.502 - 19358.326: 94.2375% ( 41) 00:08:29.771 19358.326 - 19459.151: 94.4500% ( 17) 00:08:29.771 19459.151 - 19559.975: 94.9000% ( 36) 00:08:29.771 19559.975 - 19660.800: 95.1625% ( 21) 00:08:29.771 19660.800 - 19761.625: 95.4875% ( 26) 00:08:29.771 19761.625 - 19862.449: 95.8875% ( 32) 00:08:29.771 19862.449 - 19963.274: 96.2250% ( 27) 00:08:29.771 19963.274 - 20064.098: 96.4750% ( 20) 00:08:29.771 20064.098 - 20164.923: 96.9375% ( 37) 00:08:29.771 20164.923 - 20265.748: 97.1000% ( 13) 00:08:29.771 20265.748 - 20366.572: 97.3625% ( 21) 00:08:29.771 20366.572 - 20467.397: 97.5625% ( 16) 00:08:29.771 20467.397 - 20568.222: 97.7375% ( 14) 00:08:29.771 20568.222 - 20669.046: 97.9250% ( 15) 00:08:29.771 20669.046 - 20769.871: 98.0750% ( 12) 00:08:29.771 20769.871 - 20870.695: 98.1875% ( 9) 00:08:29.771 20870.695 - 20971.520: 98.2500% ( 5) 00:08:29.771 20971.520 - 21072.345: 98.3125% ( 5) 00:08:29.771 21072.345 - 21173.169: 98.3625% ( 4) 00:08:29.771 21173.169 - 21273.994: 98.4000% ( 3) 00:08:29.771 30852.332 - 31053.982: 98.4625% ( 5) 00:08:29.771 31053.982 - 31255.631: 98.5500% ( 7) 00:08:29.771 31255.631 - 31457.280: 98.6500% ( 8) 00:08:29.771 31457.280 - 31658.929: 98.7250% ( 6) 00:08:29.771 31658.929 - 31860.578: 98.8500% ( 10) 00:08:29.771 31860.578 - 32062.228: 98.9500% ( 8) 00:08:29.771 32062.228 - 32263.877: 99.0375% ( 7) 00:08:29.771 32263.877 - 32465.526: 99.1250% ( 7) 00:08:29.771 32465.526 - 32667.175: 99.2000% ( 6) 00:08:29.771 37506.757 - 37708.406: 99.2500% ( 4) 00:08:29.771 37708.406 - 37910.055: 99.3375% ( 7) 00:08:29.771 37910.055 - 38111.705: 99.4625% ( 10) 00:08:29.771 38111.705 - 38313.354: 99.5000% ( 3) 00:08:29.771 38313.354 - 38515.003: 99.6000% ( 8) 00:08:29.771 38515.003 - 38716.652: 99.7250% ( 10) 00:08:29.771 38716.652 - 38918.302: 99.8125% ( 7) 00:08:29.771 38918.302 - 39119.951: 99.9125% ( 8) 00:08:29.771 39119.951 - 39321.600: 100.0000% ( 7) 00:08:29.771 00:08:29.771 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:29.771 ============================================================================== 00:08:29.771 Range in us Cumulative IO count 00:08:29.771 9427.102 - 9477.514: 0.0375% ( 3) 00:08:29.771 9477.514 - 9527.926: 0.0875% ( 4) 00:08:29.771 9527.926 - 9578.338: 0.1375% 
( 4) 00:08:29.771 9578.338 - 9628.751: 0.2125% ( 6) 00:08:29.771 9628.751 - 9679.163: 0.2875% ( 6) 00:08:29.771 9679.163 - 9729.575: 0.3625% ( 6) 00:08:29.771 9729.575 - 9779.988: 0.5000% ( 11) 00:08:29.771 9779.988 - 9830.400: 0.5875% ( 7) 00:08:29.771 9830.400 - 9880.812: 0.6625% ( 6) 00:08:29.771 9880.812 - 9931.225: 0.7125% ( 4) 00:08:29.771 9931.225 - 9981.637: 0.7750% ( 5) 00:08:29.771 9981.637 - 10032.049: 0.8000% ( 2) 00:08:29.771 13208.025 - 13308.849: 0.8500% ( 4) 00:08:29.771 13308.849 - 13409.674: 1.2500% ( 32) 00:08:29.771 13409.674 - 13510.498: 1.6750% ( 34) 00:08:29.771 13510.498 - 13611.323: 1.8500% ( 14) 00:08:29.771 13611.323 - 13712.148: 2.1500% ( 24) 00:08:29.771 13712.148 - 13812.972: 2.5500% ( 32) 00:08:29.771 13812.972 - 13913.797: 3.2250% ( 54) 00:08:29.771 13913.797 - 14014.622: 4.1375% ( 73) 00:08:29.771 14014.622 - 14115.446: 5.4500% ( 105) 00:08:29.771 14115.446 - 14216.271: 7.7250% ( 182) 00:08:29.771 14216.271 - 14317.095: 10.4000% ( 214) 00:08:29.771 14317.095 - 14417.920: 14.0250% ( 290) 00:08:29.771 14417.920 - 14518.745: 17.0750% ( 244) 00:08:29.771 14518.745 - 14619.569: 20.4625% ( 271) 00:08:29.771 14619.569 - 14720.394: 24.2125% ( 300) 00:08:29.771 14720.394 - 14821.218: 28.2500% ( 323) 00:08:29.771 14821.218 - 14922.043: 32.1500% ( 312) 00:08:29.771 14922.043 - 15022.868: 36.3750% ( 338) 00:08:29.771 15022.868 - 15123.692: 41.0375% ( 373) 00:08:29.771 15123.692 - 15224.517: 44.8375% ( 304) 00:08:29.771 15224.517 - 15325.342: 48.9750% ( 331) 00:08:29.771 15325.342 - 15426.166: 52.8250% ( 308) 00:08:29.771 15426.166 - 15526.991: 57.0375% ( 337) 00:08:29.771 15526.991 - 15627.815: 60.4625% ( 274) 00:08:29.771 15627.815 - 15728.640: 63.5750% ( 249) 00:08:29.771 15728.640 - 15829.465: 66.4750% ( 232) 00:08:29.771 15829.465 - 15930.289: 69.0000% ( 202) 00:08:29.771 15930.289 - 16031.114: 71.1250% ( 170) 00:08:29.771 16031.114 - 16131.938: 72.6375% ( 121) 00:08:29.771 16131.938 - 16232.763: 74.0875% ( 116) 00:08:29.771 16232.763 - 16333.588: 75.1500% ( 85) 00:08:29.771 16333.588 - 16434.412: 76.8250% ( 134) 00:08:29.771 16434.412 - 16535.237: 78.3875% ( 125) 00:08:29.771 16535.237 - 16636.062: 79.4500% ( 85) 00:08:29.771 16636.062 - 16736.886: 80.1125% ( 53) 00:08:29.771 16736.886 - 16837.711: 80.9125% ( 64) 00:08:29.771 16837.711 - 16938.535: 81.6750% ( 61) 00:08:29.771 16938.535 - 17039.360: 82.5375% ( 69) 00:08:29.771 17039.360 - 17140.185: 83.0500% ( 41) 00:08:29.771 17140.185 - 17241.009: 83.6000% ( 44) 00:08:29.771 17241.009 - 17341.834: 84.4125% ( 65) 00:08:29.771 17341.834 - 17442.658: 85.1000% ( 55) 00:08:29.771 17442.658 - 17543.483: 86.0750% ( 78) 00:08:29.771 17543.483 - 17644.308: 86.5250% ( 36) 00:08:29.771 17644.308 - 17745.132: 86.8375% ( 25) 00:08:29.771 17745.132 - 17845.957: 87.2500% ( 33) 00:08:29.771 17845.957 - 17946.782: 87.7375% ( 39) 00:08:29.771 17946.782 - 18047.606: 88.3125% ( 46) 00:08:29.771 18047.606 - 18148.431: 88.8125% ( 40) 00:08:29.771 18148.431 - 18249.255: 89.2750% ( 37) 00:08:29.771 18249.255 - 18350.080: 89.7000% ( 34) 00:08:29.771 18350.080 - 18450.905: 90.1625% ( 37) 00:08:29.771 18450.905 - 18551.729: 90.7125% ( 44) 00:08:29.771 18551.729 - 18652.554: 91.1375% ( 34) 00:08:29.771 18652.554 - 18753.378: 91.6000% ( 37) 00:08:29.771 18753.378 - 18854.203: 92.0375% ( 35) 00:08:29.771 18854.203 - 18955.028: 92.5250% ( 39) 00:08:29.771 18955.028 - 19055.852: 92.9875% ( 37) 00:08:29.771 19055.852 - 19156.677: 93.5375% ( 44) 00:08:29.771 19156.677 - 19257.502: 94.1750% ( 51) 00:08:29.771 19257.502 - 19358.326: 94.8375% ( 53) 
00:08:29.771 19358.326 - 19459.151: 95.4875% ( 52) 00:08:29.771 19459.151 - 19559.975: 95.8625% ( 30) 00:08:29.771 19559.975 - 19660.800: 96.1750% ( 25) 00:08:29.771 19660.800 - 19761.625: 96.3500% ( 14) 00:08:29.771 19761.625 - 19862.449: 96.4625% ( 9) 00:08:29.771 19862.449 - 19963.274: 96.6500% ( 15) 00:08:29.771 19963.274 - 20064.098: 96.8250% ( 14) 00:08:29.771 20064.098 - 20164.923: 96.9625% ( 11) 00:08:29.771 20164.923 - 20265.748: 97.3250% ( 29) 00:08:29.771 20265.748 - 20366.572: 97.4875% ( 13) 00:08:29.771 20366.572 - 20467.397: 97.6250% ( 11) 00:08:29.771 20467.397 - 20568.222: 97.7625% ( 11) 00:08:29.771 20568.222 - 20669.046: 97.8750% ( 9) 00:08:29.771 20669.046 - 20769.871: 98.1875% ( 25) 00:08:29.771 20769.871 - 20870.695: 98.2875% ( 8) 00:08:29.771 20870.695 - 20971.520: 98.3500% ( 5) 00:08:29.771 20971.520 - 21072.345: 98.4000% ( 4) 00:08:29.771 30045.735 - 30247.385: 98.4125% ( 1) 00:08:29.771 30247.385 - 30449.034: 98.5000% ( 7) 00:08:29.771 30449.034 - 30650.683: 98.6125% ( 9) 00:08:29.771 30650.683 - 30852.332: 98.7125% ( 8) 00:08:29.771 30852.332 - 31053.982: 98.8250% ( 9) 00:08:29.771 31053.982 - 31255.631: 98.9375% ( 9) 00:08:29.771 31255.631 - 31457.280: 99.0500% ( 9) 00:08:29.771 31457.280 - 31658.929: 99.1625% ( 9) 00:08:29.771 31658.929 - 31860.578: 99.2000% ( 3) 00:08:29.771 36700.160 - 36901.809: 99.2125% ( 1) 00:08:29.771 36901.809 - 37103.458: 99.3250% ( 9) 00:08:29.771 37103.458 - 37305.108: 99.4250% ( 8) 00:08:29.771 37305.108 - 37506.757: 99.5375% ( 9) 00:08:29.771 37506.757 - 37708.406: 99.6500% ( 9) 00:08:29.771 37708.406 - 37910.055: 99.7625% ( 9) 00:08:29.771 37910.055 - 38111.705: 99.8625% ( 8) 00:08:29.771 38111.705 - 38313.354: 99.9750% ( 9) 00:08:29.771 38313.354 - 38515.003: 100.0000% ( 2) 00:08:29.771 00:08:29.771 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:29.771 ============================================================================== 00:08:29.771 Range in us Cumulative IO count 00:08:29.771 7813.908 - 7864.320: 0.0125% ( 1) 00:08:29.771 7914.732 - 7965.145: 0.0500% ( 3) 00:08:29.771 7965.145 - 8015.557: 0.0875% ( 3) 00:08:29.771 8015.557 - 8065.969: 0.1625% ( 6) 00:08:29.771 8065.969 - 8116.382: 0.2500% ( 7) 00:08:29.771 8116.382 - 8166.794: 0.4125% ( 13) 00:08:29.771 8166.794 - 8217.206: 0.5500% ( 11) 00:08:29.771 8217.206 - 8267.618: 0.5875% ( 3) 00:08:29.771 8267.618 - 8318.031: 0.6250% ( 3) 00:08:29.771 8318.031 - 8368.443: 0.6625% ( 3) 00:08:29.771 8368.443 - 8418.855: 0.6875% ( 2) 00:08:29.771 8418.855 - 8469.268: 0.7250% ( 3) 00:08:29.771 8469.268 - 8519.680: 0.7625% ( 3) 00:08:29.771 8519.680 - 8570.092: 0.7875% ( 2) 00:08:29.771 8570.092 - 8620.505: 0.8000% ( 1) 00:08:29.771 12703.902 - 12754.314: 0.8250% ( 2) 00:08:29.771 12754.314 - 12804.726: 0.8625% ( 3) 00:08:29.771 12804.726 - 12855.138: 0.9500% ( 7) 00:08:29.771 12855.138 - 12905.551: 1.0250% ( 6) 00:08:29.771 12905.551 - 13006.375: 1.3125% ( 23) 00:08:29.771 13006.375 - 13107.200: 1.4500% ( 11) 00:08:29.771 13107.200 - 13208.025: 1.5250% ( 6) 00:08:29.771 13208.025 - 13308.849: 1.6000% ( 6) 00:08:29.771 13308.849 - 13409.674: 1.6250% ( 2) 00:08:29.771 13409.674 - 13510.498: 1.6375% ( 1) 00:08:29.771 13510.498 - 13611.323: 1.8125% ( 14) 00:08:29.771 13611.323 - 13712.148: 2.1625% ( 28) 00:08:29.771 13712.148 - 13812.972: 2.9000% ( 59) 00:08:29.771 13812.972 - 13913.797: 4.0000% ( 88) 00:08:29.771 13913.797 - 14014.622: 4.9875% ( 79) 00:08:29.772 14014.622 - 14115.446: 6.4375% ( 116) 00:08:29.772 14115.446 - 14216.271: 7.8500% ( 113) 00:08:29.772 
14216.271 - 14317.095: 9.8125% ( 157) 00:08:29.772 14317.095 - 14417.920: 12.2250% ( 193) 00:08:29.772 14417.920 - 14518.745: 15.3125% ( 247) 00:08:29.772 14518.745 - 14619.569: 18.5750% ( 261) 00:08:29.772 14619.569 - 14720.394: 22.4875% ( 313) 00:08:29.772 14720.394 - 14821.218: 26.4500% ( 317) 00:08:29.772 14821.218 - 14922.043: 31.6250% ( 414) 00:08:29.772 14922.043 - 15022.868: 36.5875% ( 397) 00:08:29.772 15022.868 - 15123.692: 41.3625% ( 382) 00:08:29.772 15123.692 - 15224.517: 45.0000% ( 291) 00:08:29.772 15224.517 - 15325.342: 48.6625% ( 293) 00:08:29.772 15325.342 - 15426.166: 52.5250% ( 309) 00:08:29.772 15426.166 - 15526.991: 56.2250% ( 296) 00:08:29.772 15526.991 - 15627.815: 60.0625% ( 307) 00:08:29.772 15627.815 - 15728.640: 64.4750% ( 353) 00:08:29.772 15728.640 - 15829.465: 67.6250% ( 252) 00:08:29.772 15829.465 - 15930.289: 70.1375% ( 201) 00:08:29.772 15930.289 - 16031.114: 72.1125% ( 158) 00:08:29.772 16031.114 - 16131.938: 73.7125% ( 128) 00:08:29.772 16131.938 - 16232.763: 74.9250% ( 97) 00:08:29.772 16232.763 - 16333.588: 75.9875% ( 85) 00:08:29.772 16333.588 - 16434.412: 77.3000% ( 105) 00:08:29.772 16434.412 - 16535.237: 78.4875% ( 95) 00:08:29.772 16535.237 - 16636.062: 79.3625% ( 70) 00:08:29.772 16636.062 - 16736.886: 80.2750% ( 73) 00:08:29.772 16736.886 - 16837.711: 81.1375% ( 69) 00:08:29.772 16837.711 - 16938.535: 81.9250% ( 63) 00:08:29.772 16938.535 - 17039.360: 82.7750% ( 68) 00:08:29.772 17039.360 - 17140.185: 83.5250% ( 60) 00:08:29.772 17140.185 - 17241.009: 84.2125% ( 55) 00:08:29.772 17241.009 - 17341.834: 84.7625% ( 44) 00:08:29.772 17341.834 - 17442.658: 85.2500% ( 39) 00:08:29.772 17442.658 - 17543.483: 85.5750% ( 26) 00:08:29.772 17543.483 - 17644.308: 85.9875% ( 33) 00:08:29.772 17644.308 - 17745.132: 86.6500% ( 53) 00:08:29.772 17745.132 - 17845.957: 87.0500% ( 32) 00:08:29.772 17845.957 - 17946.782: 87.3625% ( 25) 00:08:29.772 17946.782 - 18047.606: 87.8125% ( 36) 00:08:29.772 18047.606 - 18148.431: 88.5125% ( 56) 00:08:29.772 18148.431 - 18249.255: 89.1375% ( 50) 00:08:29.772 18249.255 - 18350.080: 89.8625% ( 58) 00:08:29.772 18350.080 - 18450.905: 90.6250% ( 61) 00:08:29.772 18450.905 - 18551.729: 91.2750% ( 52) 00:08:29.772 18551.729 - 18652.554: 91.9000% ( 50) 00:08:29.772 18652.554 - 18753.378: 92.4375% ( 43) 00:08:29.772 18753.378 - 18854.203: 93.0000% ( 45) 00:08:29.772 18854.203 - 18955.028: 93.6000% ( 48) 00:08:29.772 18955.028 - 19055.852: 94.3000% ( 56) 00:08:29.772 19055.852 - 19156.677: 94.6750% ( 30) 00:08:29.772 19156.677 - 19257.502: 95.1000% ( 34) 00:08:29.772 19257.502 - 19358.326: 95.5375% ( 35) 00:08:29.772 19358.326 - 19459.151: 95.8750% ( 27) 00:08:29.772 19459.151 - 19559.975: 96.1125% ( 19) 00:08:29.772 19559.975 - 19660.800: 96.2875% ( 14) 00:08:29.772 19660.800 - 19761.625: 96.4500% ( 13) 00:08:29.772 19761.625 - 19862.449: 96.5500% ( 8) 00:08:29.772 19862.449 - 19963.274: 96.6625% ( 9) 00:08:29.772 19963.274 - 20064.098: 96.8125% ( 12) 00:08:29.772 20064.098 - 20164.923: 96.9625% ( 12) 00:08:29.772 20164.923 - 20265.748: 97.2375% ( 22) 00:08:29.772 20265.748 - 20366.572: 97.4500% ( 17) 00:08:29.772 20366.572 - 20467.397: 97.5125% ( 5) 00:08:29.772 20467.397 - 20568.222: 97.5750% ( 5) 00:08:29.772 20568.222 - 20669.046: 97.6000% ( 2) 00:08:29.772 20669.046 - 20769.871: 97.6750% ( 6) 00:08:29.772 20769.871 - 20870.695: 97.8125% ( 11) 00:08:29.772 20870.695 - 20971.520: 97.9375% ( 10) 00:08:29.772 20971.520 - 21072.345: 98.1250% ( 15) 00:08:29.772 21072.345 - 21173.169: 98.2625% ( 11) 00:08:29.772 21173.169 - 21273.994: 
98.3750% ( 9) 00:08:29.772 21273.994 - 21374.818: 98.4000% ( 2) 00:08:29.772 29844.086 - 30045.735: 98.4125% ( 1) 00:08:29.772 30045.735 - 30247.385: 98.5125% ( 8) 00:08:29.772 30247.385 - 30449.034: 98.6250% ( 9) 00:08:29.772 30449.034 - 30650.683: 98.7125% ( 7) 00:08:29.772 30650.683 - 30852.332: 98.8375% ( 10) 00:08:29.772 30852.332 - 31053.982: 98.9500% ( 9) 00:08:29.772 31053.982 - 31255.631: 99.0625% ( 9) 00:08:29.772 31255.631 - 31457.280: 99.1750% ( 9) 00:08:29.772 31457.280 - 31658.929: 99.2000% ( 2) 00:08:29.772 37305.108 - 37506.757: 99.2750% ( 6) 00:08:29.772 37506.757 - 37708.406: 99.3875% ( 9) 00:08:29.772 37708.406 - 37910.055: 99.4875% ( 8) 00:08:29.772 37910.055 - 38111.705: 99.5875% ( 8) 00:08:29.772 38111.705 - 38313.354: 99.7000% ( 9) 00:08:29.772 38313.354 - 38515.003: 99.8250% ( 10) 00:08:29.772 38515.003 - 38716.652: 99.9375% ( 9) 00:08:29.772 38716.652 - 38918.302: 100.0000% ( 5) 00:08:29.772 00:08:29.772 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:29.772 ============================================================================== 00:08:29.772 Range in us Cumulative IO count 00:08:29.772 7360.197 - 7410.609: 0.0250% ( 2) 00:08:29.772 7410.609 - 7461.022: 0.0625% ( 3) 00:08:29.772 7461.022 - 7511.434: 0.0875% ( 2) 00:08:29.772 7511.434 - 7561.846: 0.1500% ( 5) 00:08:29.772 7561.846 - 7612.258: 0.2375% ( 7) 00:08:29.772 7612.258 - 7662.671: 0.3250% ( 7) 00:08:29.772 7662.671 - 7713.083: 0.4125% ( 7) 00:08:29.772 7713.083 - 7763.495: 0.4875% ( 6) 00:08:29.772 7763.495 - 7813.908: 0.5625% ( 6) 00:08:29.772 7813.908 - 7864.320: 0.6375% ( 6) 00:08:29.772 7864.320 - 7914.732: 0.7125% ( 6) 00:08:29.772 7914.732 - 7965.145: 0.7625% ( 4) 00:08:29.772 7965.145 - 8015.557: 0.8000% ( 3) 00:08:29.772 11594.831 - 11645.243: 0.8125% ( 1) 00:08:29.772 11645.243 - 11695.655: 0.8250% ( 1) 00:08:29.772 11695.655 - 11746.068: 0.8750% ( 4) 00:08:29.772 11746.068 - 11796.480: 0.9125% ( 3) 00:08:29.772 11796.480 - 11846.892: 0.9500% ( 3) 00:08:29.772 11846.892 - 11897.305: 1.0000% ( 4) 00:08:29.772 11897.305 - 11947.717: 1.0750% ( 6) 00:08:29.772 11947.717 - 11998.129: 1.2750% ( 16) 00:08:29.772 11998.129 - 12048.542: 1.3375% ( 5) 00:08:29.772 12048.542 - 12098.954: 1.3750% ( 3) 00:08:29.772 12098.954 - 12149.366: 1.4000% ( 2) 00:08:29.772 12149.366 - 12199.778: 1.4375% ( 3) 00:08:29.772 12199.778 - 12250.191: 1.4750% ( 3) 00:08:29.772 12250.191 - 12300.603: 1.5125% ( 3) 00:08:29.772 12300.603 - 12351.015: 1.5375% ( 2) 00:08:29.772 12351.015 - 12401.428: 1.5750% ( 3) 00:08:29.772 12401.428 - 12451.840: 1.6000% ( 2) 00:08:29.772 13308.849 - 13409.674: 1.6625% ( 5) 00:08:29.772 13409.674 - 13510.498: 1.7750% ( 9) 00:08:29.772 13510.498 - 13611.323: 2.0125% ( 19) 00:08:29.772 13611.323 - 13712.148: 2.4125% ( 32) 00:08:29.772 13712.148 - 13812.972: 2.7000% ( 23) 00:08:29.772 13812.972 - 13913.797: 3.2250% ( 42) 00:08:29.772 13913.797 - 14014.622: 3.8750% ( 52) 00:08:29.772 14014.622 - 14115.446: 4.9000% ( 82) 00:08:29.772 14115.446 - 14216.271: 6.4375% ( 123) 00:08:29.772 14216.271 - 14317.095: 8.8625% ( 194) 00:08:29.772 14317.095 - 14417.920: 11.7875% ( 234) 00:08:29.772 14417.920 - 14518.745: 14.6875% ( 232) 00:08:29.772 14518.745 - 14619.569: 18.0500% ( 269) 00:08:29.772 14619.569 - 14720.394: 21.4500% ( 272) 00:08:29.772 14720.394 - 14821.218: 24.8500% ( 272) 00:08:29.772 14821.218 - 14922.043: 29.2250% ( 350) 00:08:29.772 14922.043 - 15022.868: 33.6625% ( 355) 00:08:29.772 15022.868 - 15123.692: 38.7750% ( 409) 00:08:29.772 15123.692 - 15224.517: 43.6250% ( 388) 
00:08:29.772 15224.517 - 15325.342: 48.7375% ( 409) 00:08:29.772 15325.342 - 15426.166: 53.4625% ( 378) 00:08:29.772 15426.166 - 15526.991: 57.2250% ( 301) 00:08:29.772 15526.991 - 15627.815: 60.5500% ( 266) 00:08:29.772 15627.815 - 15728.640: 64.1000% ( 284) 00:08:29.772 15728.640 - 15829.465: 66.3875% ( 183) 00:08:29.772 15829.465 - 15930.289: 69.1625% ( 222) 00:08:29.772 15930.289 - 16031.114: 71.3000% ( 171) 00:08:29.772 16031.114 - 16131.938: 73.2625% ( 157) 00:08:29.772 16131.938 - 16232.763: 74.9500% ( 135) 00:08:29.772 16232.763 - 16333.588: 76.5500% ( 128) 00:08:29.772 16333.588 - 16434.412: 78.1000% ( 124) 00:08:29.772 16434.412 - 16535.237: 79.1000% ( 80) 00:08:29.772 16535.237 - 16636.062: 80.0625% ( 77) 00:08:29.772 16636.062 - 16736.886: 81.3375% ( 102) 00:08:29.772 16736.886 - 16837.711: 82.1500% ( 65) 00:08:29.772 16837.711 - 16938.535: 83.1250% ( 78) 00:08:29.772 16938.535 - 17039.360: 84.0125% ( 71) 00:08:29.772 17039.360 - 17140.185: 84.5500% ( 43) 00:08:29.772 17140.185 - 17241.009: 85.3250% ( 62) 00:08:29.772 17241.009 - 17341.834: 85.9250% ( 48) 00:08:29.772 17341.834 - 17442.658: 86.2625% ( 27) 00:08:29.772 17442.658 - 17543.483: 86.5500% ( 23) 00:08:29.772 17543.483 - 17644.308: 86.9500% ( 32) 00:08:29.772 17644.308 - 17745.132: 87.4125% ( 37) 00:08:29.772 17745.132 - 17845.957: 87.9500% ( 43) 00:08:29.772 17845.957 - 17946.782: 88.6625% ( 57) 00:08:29.772 17946.782 - 18047.606: 89.0625% ( 32) 00:08:29.772 18047.606 - 18148.431: 89.4750% ( 33) 00:08:29.772 18148.431 - 18249.255: 89.9250% ( 36) 00:08:29.772 18249.255 - 18350.080: 90.4000% ( 38) 00:08:29.772 18350.080 - 18450.905: 90.8875% ( 39) 00:08:29.772 18450.905 - 18551.729: 91.4125% ( 42) 00:08:29.772 18551.729 - 18652.554: 91.7500% ( 27) 00:08:29.772 18652.554 - 18753.378: 92.0625% ( 25) 00:08:29.772 18753.378 - 18854.203: 92.4250% ( 29) 00:08:29.772 18854.203 - 18955.028: 92.8125% ( 31) 00:08:29.772 18955.028 - 19055.852: 93.2250% ( 33) 00:08:29.772 19055.852 - 19156.677: 93.5625% ( 27) 00:08:29.772 19156.677 - 19257.502: 93.8500% ( 23) 00:08:29.772 19257.502 - 19358.326: 94.4625% ( 49) 00:08:29.773 19358.326 - 19459.151: 94.8125% ( 28) 00:08:29.773 19459.151 - 19559.975: 95.1500% ( 27) 00:08:29.773 19559.975 - 19660.800: 95.4500% ( 24) 00:08:29.773 19660.800 - 19761.625: 95.7000% ( 20) 00:08:29.773 19761.625 - 19862.449: 96.1000% ( 32) 00:08:29.773 19862.449 - 19963.274: 96.7750% ( 54) 00:08:29.773 19963.274 - 20064.098: 97.0875% ( 25) 00:08:29.773 20064.098 - 20164.923: 97.3250% ( 19) 00:08:29.773 20164.923 - 20265.748: 97.4750% ( 12) 00:08:29.773 20265.748 - 20366.572: 97.6000% ( 10) 00:08:29.773 20366.572 - 20467.397: 97.7000% ( 8) 00:08:29.773 20467.397 - 20568.222: 97.7625% ( 5) 00:08:29.773 20568.222 - 20669.046: 97.8375% ( 6) 00:08:29.773 20669.046 - 20769.871: 97.9375% ( 8) 00:08:29.773 20769.871 - 20870.695: 98.1625% ( 18) 00:08:29.773 20870.695 - 20971.520: 98.2375% ( 6) 00:08:29.773 20971.520 - 21072.345: 98.2875% ( 4) 00:08:29.773 21072.345 - 21173.169: 98.3500% ( 5) 00:08:29.773 21173.169 - 21273.994: 98.4000% ( 4) 00:08:29.773 28835.840 - 29037.489: 98.4250% ( 2) 00:08:29.773 29037.489 - 29239.138: 98.5250% ( 8) 00:08:29.773 29239.138 - 29440.788: 98.6375% ( 9) 00:08:29.773 29440.788 - 29642.437: 98.7500% ( 9) 00:08:29.773 29642.437 - 29844.086: 98.8750% ( 10) 00:08:29.773 29844.086 - 30045.735: 98.9875% ( 9) 00:08:29.773 30045.735 - 30247.385: 99.1000% ( 9) 00:08:29.773 30247.385 - 30449.034: 99.2000% ( 8) 00:08:29.773 36498.511 - 36700.160: 99.2125% ( 1) 00:08:29.773 36700.160 - 36901.809: 
99.3000% ( 7) 00:08:29.773 36901.809 - 37103.458: 99.4000% ( 8) 00:08:29.773 37103.458 - 37305.108: 99.5125% ( 9) 00:08:29.773 37305.108 - 37506.757: 99.6250% ( 9) 00:08:29.773 37506.757 - 37708.406: 99.7375% ( 9) 00:08:29.773 37708.406 - 37910.055: 99.8500% ( 9) 00:08:29.773 37910.055 - 38111.705: 99.9625% ( 9) 00:08:29.773 38111.705 - 38313.354: 100.0000% ( 3) 00:08:29.773 00:08:29.773 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:29.773 ============================================================================== 00:08:29.773 Range in us Cumulative IO count 00:08:29.773 6704.837 - 6755.249: 0.0248% ( 2) 00:08:29.773 6755.249 - 6805.662: 0.0744% ( 4) 00:08:29.773 6805.662 - 6856.074: 0.1240% ( 4) 00:08:29.773 6856.074 - 6906.486: 0.1612% ( 3) 00:08:29.773 6906.486 - 6956.898: 0.1984% ( 3) 00:08:29.773 6956.898 - 7007.311: 0.3348% ( 11) 00:08:29.773 7007.311 - 7057.723: 0.4712% ( 11) 00:08:29.773 7057.723 - 7108.135: 0.5332% ( 5) 00:08:29.773 7108.135 - 7158.548: 0.5580% ( 2) 00:08:29.773 7158.548 - 7208.960: 0.5952% ( 3) 00:08:29.773 7208.960 - 7259.372: 0.6324% ( 3) 00:08:29.773 7259.372 - 7309.785: 0.6696% ( 3) 00:08:29.773 7309.785 - 7360.197: 0.6944% ( 2) 00:08:29.773 7360.197 - 7410.609: 0.7316% ( 3) 00:08:29.773 7410.609 - 7461.022: 0.7688% ( 3) 00:08:29.773 7461.022 - 7511.434: 0.7937% ( 2) 00:08:29.773 11191.532 - 11241.945: 0.8061% ( 1) 00:08:29.773 11292.357 - 11342.769: 0.8557% ( 4) 00:08:29.773 11342.769 - 11393.182: 0.9177% ( 5) 00:08:29.773 11393.182 - 11443.594: 1.0045% ( 7) 00:08:29.773 11443.594 - 11494.006: 1.0789% ( 6) 00:08:29.773 11494.006 - 11544.418: 1.1905% ( 9) 00:08:29.773 11544.418 - 11594.831: 1.3517% ( 13) 00:08:29.773 11594.831 - 11645.243: 1.4137% ( 5) 00:08:29.773 11645.243 - 11695.655: 1.4881% ( 6) 00:08:29.773 11695.655 - 11746.068: 1.5253% ( 3) 00:08:29.773 11746.068 - 11796.480: 1.5625% ( 3) 00:08:29.773 11796.480 - 11846.892: 1.5873% ( 2) 00:08:29.773 12754.314 - 12804.726: 1.6121% ( 2) 00:08:29.773 12804.726 - 12855.138: 1.6493% ( 3) 00:08:29.773 12855.138 - 12905.551: 1.7485% ( 8) 00:08:29.773 12905.551 - 13006.375: 1.8849% ( 11) 00:08:29.773 13006.375 - 13107.200: 2.0461% ( 13) 00:08:29.773 13107.200 - 13208.025: 2.1949% ( 12) 00:08:29.773 13208.025 - 13308.849: 2.3065% ( 9) 00:08:29.773 13308.849 - 13409.674: 2.4182% ( 9) 00:08:29.773 13409.674 - 13510.498: 2.5298% ( 9) 00:08:29.773 13510.498 - 13611.323: 2.7034% ( 14) 00:08:29.773 13611.323 - 13712.148: 3.0010% ( 24) 00:08:29.773 13712.148 - 13812.972: 3.7326% ( 59) 00:08:29.773 13812.972 - 13913.797: 4.3527% ( 50) 00:08:29.773 13913.797 - 14014.622: 5.4812% ( 91) 00:08:29.773 14014.622 - 14115.446: 6.9568% ( 119) 00:08:29.773 14115.446 - 14216.271: 8.8046% ( 149) 00:08:29.773 14216.271 - 14317.095: 11.3591% ( 206) 00:08:29.773 14317.095 - 14417.920: 14.2237% ( 231) 00:08:29.773 14417.920 - 14518.745: 17.3611% ( 253) 00:08:29.773 14518.745 - 14619.569: 21.1558% ( 306) 00:08:29.773 14619.569 - 14720.394: 24.9380% ( 305) 00:08:29.773 14720.394 - 14821.218: 28.6582% ( 300) 00:08:29.773 14821.218 - 14922.043: 31.9816% ( 268) 00:08:29.773 14922.043 - 15022.868: 36.1855% ( 339) 00:08:29.773 15022.868 - 15123.692: 40.5258% ( 350) 00:08:29.773 15123.692 - 15224.517: 44.5809% ( 327) 00:08:29.773 15224.517 - 15325.342: 48.9459% ( 352) 00:08:29.773 15325.342 - 15426.166: 53.0630% ( 332) 00:08:29.773 15426.166 - 15526.991: 56.2376% ( 256) 00:08:29.773 15526.991 - 15627.815: 59.5238% ( 265) 00:08:29.773 15627.815 - 15728.640: 62.6488% ( 252) 00:08:29.773 15728.640 - 15829.465: 65.4266% 
( 224) 00:08:29.773 15829.465 - 15930.289: 68.1424% ( 219) 00:08:29.773 15930.289 - 16031.114: 70.5977% ( 198) 00:08:29.773 16031.114 - 16131.938: 72.7307% ( 172) 00:08:29.773 16131.938 - 16232.763: 74.5288% ( 145) 00:08:29.773 16232.763 - 16333.588: 76.1037% ( 127) 00:08:29.773 16333.588 - 16434.412: 77.8026% ( 137) 00:08:29.773 16434.412 - 16535.237: 79.4767% ( 135) 00:08:29.773 16535.237 - 16636.062: 80.6548% ( 95) 00:08:29.773 16636.062 - 16736.886: 82.1057% ( 117) 00:08:29.773 16736.886 - 16837.711: 83.5689% ( 118) 00:08:29.773 16837.711 - 16938.535: 84.3874% ( 66) 00:08:29.773 16938.535 - 17039.360: 85.0570% ( 54) 00:08:29.773 17039.360 - 17140.185: 85.4911% ( 35) 00:08:29.773 17140.185 - 17241.009: 86.0119% ( 42) 00:08:29.773 17241.009 - 17341.834: 86.3467% ( 27) 00:08:29.773 17341.834 - 17442.658: 86.5699% ( 18) 00:08:29.773 17442.658 - 17543.483: 86.8552% ( 23) 00:08:29.773 17543.483 - 17644.308: 87.1156% ( 21) 00:08:29.773 17644.308 - 17745.132: 87.2520% ( 11) 00:08:29.773 17745.132 - 17845.957: 87.4380% ( 15) 00:08:29.773 17845.957 - 17946.782: 87.7604% ( 26) 00:08:29.773 17946.782 - 18047.606: 88.1200% ( 29) 00:08:29.773 18047.606 - 18148.431: 88.5169% ( 32) 00:08:29.773 18148.431 - 18249.255: 88.8269% ( 25) 00:08:29.773 18249.255 - 18350.080: 89.1741% ( 28) 00:08:29.773 18350.080 - 18450.905: 89.5089% ( 27) 00:08:29.773 18450.905 - 18551.729: 89.7941% ( 23) 00:08:29.773 18551.729 - 18652.554: 90.2034% ( 33) 00:08:29.773 18652.554 - 18753.378: 90.8110% ( 49) 00:08:29.773 18753.378 - 18854.203: 91.2202% ( 33) 00:08:29.773 18854.203 - 18955.028: 91.9023% ( 55) 00:08:29.773 18955.028 - 19055.852: 92.4479% ( 44) 00:08:29.773 19055.852 - 19156.677: 92.9812% ( 43) 00:08:29.773 19156.677 - 19257.502: 93.5888% ( 49) 00:08:29.773 19257.502 - 19358.326: 94.1344% ( 44) 00:08:29.773 19358.326 - 19459.151: 94.4940% ( 29) 00:08:29.773 19459.151 - 19559.975: 94.8909% ( 32) 00:08:29.773 19559.975 - 19660.800: 95.1761% ( 23) 00:08:29.773 19660.800 - 19761.625: 95.4737% ( 24) 00:08:29.773 19761.625 - 19862.449: 95.7713% ( 24) 00:08:29.773 19862.449 - 19963.274: 96.1434% ( 30) 00:08:29.773 19963.274 - 20064.098: 96.4658% ( 26) 00:08:29.773 20064.098 - 20164.923: 96.9246% ( 37) 00:08:29.773 20164.923 - 20265.748: 97.1726% ( 20) 00:08:29.773 20265.748 - 20366.572: 97.3834% ( 17) 00:08:29.773 20366.572 - 20467.397: 97.5570% ( 14) 00:08:29.773 20467.397 - 20568.222: 97.7059% ( 12) 00:08:29.773 20568.222 - 20669.046: 97.8299% ( 10) 00:08:29.773 20669.046 - 20769.871: 97.9415% ( 9) 00:08:29.773 20769.871 - 20870.695: 98.0655% ( 10) 00:08:29.773 20870.695 - 20971.520: 98.1523% ( 7) 00:08:29.773 20971.520 - 21072.345: 98.2019% ( 4) 00:08:29.773 21072.345 - 21173.169: 98.2639% ( 5) 00:08:29.773 21173.169 - 21273.994: 98.3011% ( 3) 00:08:29.773 21273.994 - 21374.818: 98.3631% ( 5) 00:08:29.773 21374.818 - 21475.643: 98.4127% ( 4) 00:08:29.773 22383.065 - 22483.889: 98.4623% ( 4) 00:08:29.773 22483.889 - 22584.714: 98.5243% ( 5) 00:08:29.773 22584.714 - 22685.538: 98.5863% ( 5) 00:08:29.773 22685.538 - 22786.363: 98.6359% ( 4) 00:08:29.773 22786.363 - 22887.188: 98.6979% ( 5) 00:08:29.773 22887.188 - 22988.012: 98.7475% ( 4) 00:08:29.773 22988.012 - 23088.837: 98.8095% ( 5) 00:08:29.773 23088.837 - 23189.662: 98.8715% ( 5) 00:08:29.773 23189.662 - 23290.486: 98.9211% ( 4) 00:08:29.773 23290.486 - 23391.311: 98.9831% ( 5) 00:08:29.773 23391.311 - 23492.135: 99.0327% ( 4) 00:08:29.773 23492.135 - 23592.960: 99.0823% ( 4) 00:08:29.773 23592.960 - 23693.785: 99.1319% ( 4) 00:08:29.773 23693.785 - 23794.609: 
99.1939% ( 5) 00:08:29.773 23794.609 - 23895.434: 99.2063% ( 1) 00:08:29.773 28634.191 - 28835.840: 99.2560% ( 4) 00:08:29.773 28835.840 - 29037.489: 99.3676% ( 9) 00:08:29.773 29037.489 - 29239.138: 99.4668% ( 8) 00:08:29.773 29239.138 - 29440.788: 99.5784% ( 9) 00:08:29.773 29440.788 - 29642.437: 99.6900% ( 9) 00:08:29.773 29642.437 - 29844.086: 99.8016% ( 9) 00:08:29.773 29844.086 - 30045.735: 99.9132% ( 9) 00:08:29.773 30045.735 - 30247.385: 100.0000% ( 7) 00:08:29.773 00:08:30.035 00:31:06 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:30.035 00:08:30.035 real 0m2.524s 00:08:30.035 user 0m2.168s 00:08:30.035 sys 0m0.239s 00:08:30.035 00:31:06 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.035 00:31:06 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:30.035 ************************************ 00:08:30.035 END TEST nvme_perf 00:08:30.035 ************************************ 00:08:30.035 00:31:06 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:30.035 00:31:06 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:30.035 00:31:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.035 00:31:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.035 ************************************ 00:08:30.035 START TEST nvme_hello_world 00:08:30.035 ************************************ 00:08:30.035 00:31:06 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:30.295 Initializing NVMe Controllers 00:08:30.295 Attached to 0000:00:13.0 00:08:30.295 Namespace ID: 1 size: 1GB 00:08:30.295 Attached to 0000:00:10.0 00:08:30.295 Namespace ID: 1 size: 6GB 00:08:30.295 Attached to 0000:00:11.0 00:08:30.295 Namespace ID: 1 size: 5GB 00:08:30.295 Attached to 0000:00:12.0 00:08:30.295 Namespace ID: 1 size: 4GB 00:08:30.295 Namespace ID: 2 size: 4GB 00:08:30.295 Namespace ID: 3 size: 4GB 00:08:30.295 Initialization complete. 00:08:30.295 INFO: using host memory buffer for IO 00:08:30.295 Hello world! 00:08:30.295 INFO: using host memory buffer for IO 00:08:30.295 Hello world! 00:08:30.295 INFO: using host memory buffer for IO 00:08:30.295 Hello world! 00:08:30.295 INFO: using host memory buffer for IO 00:08:30.295 Hello world! 00:08:30.295 INFO: using host memory buffer for IO 00:08:30.295 Hello world! 00:08:30.295 INFO: using host memory buffer for IO 00:08:30.295 Hello world! 
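[Editor's note] The hello_world example driving the output above attaches to each controller, allocates an IO queue pair per namespace, writes a greeting to LBA 0, and polls for the completion; the "INFO: using host memory buffer for IO" lines suggest it fell back to a DMA-able host buffer rather than a controller memory buffer. A minimal sketch of that write path, assuming a namespace already discovered via spdk_nvme_probe() (error handling and the read-back step are elided; the function and variable names here are illustrative, the SPDK calls are real):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cpl;
        *(bool *)arg = true;                      /* flag polled by the caller */
    }

    static void hello_write(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
    {
        struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        uint32_t sector = spdk_nvme_ns_get_sector_size(ns);
        /* DMA-safe host buffer -- the "host memory buffer" the log refers to */
        char *buf = spdk_zmalloc(sector, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                                 SPDK_MALLOC_DMA);
        bool done = false;

        snprintf(buf, sector, "Hello world!");
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* block */,
                               io_done, &done, 0);
        while (!done) {
            spdk_nvme_qpair_process_completions(qpair, 0);  /* poll, no interrupts */
        }
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qpair);
    }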
00:08:30.295 00:08:30.295 real 0m0.235s 00:08:30.295 user 0m0.082s 00:08:30.295 sys 0m0.106s 00:08:30.295 00:31:06 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.295 00:31:06 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:30.295 ************************************ 00:08:30.295 END TEST nvme_hello_world 00:08:30.295 ************************************ 00:08:30.295 00:31:06 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:30.295 00:31:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.295 00:31:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.295 00:31:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.295 ************************************ 00:08:30.295 START TEST nvme_sgl 00:08:30.295 ************************************ 00:08:30.295 00:31:06 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:30.556 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:30.556 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:30.556 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:30.556 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:30.556 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:30.556 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:30.556 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:30.556 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:30.556 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:30.556 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:30.556 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:30.556 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:30.556 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:30.556 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:30.556 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:30.556 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:30.556 NVMe Readv/Writev Request test 00:08:30.556 Attached to 0000:00:13.0 00:08:30.556 Attached to 0000:00:10.0 00:08:30.556 Attached to 0000:00:11.0 00:08:30.556 Attached to 0000:00:12.0 00:08:30.556 0000:00:10.0: build_io_request_2 test passed 00:08:30.556 0000:00:10.0: build_io_request_4 test passed 00:08:30.556 0000:00:10.0: build_io_request_5 test passed 00:08:30.556 0000:00:10.0: build_io_request_6 test passed 00:08:30.556 0000:00:10.0: build_io_request_7 test passed 00:08:30.556 0000:00:10.0: build_io_request_10 test passed 00:08:30.556 0000:00:11.0: build_io_request_2 test passed 00:08:30.556 0000:00:11.0: build_io_request_4 test passed 00:08:30.556 0000:00:11.0: build_io_request_5 test passed 00:08:30.556 0000:00:11.0: build_io_request_6 test passed 00:08:30.556 0000:00:11.0: build_io_request_7 test passed 00:08:30.556 0000:00:11.0: build_io_request_10 test passed 00:08:30.556 Cleaning up... 00:08:30.556 00:08:30.556 real 0m0.284s 00:08:30.556 user 0m0.151s 00:08:30.556 sys 0m0.087s 00:08:30.556 ************************************ 00:08:30.556 END TEST nvme_sgl 00:08:30.556 ************************************ 00:08:30.556 00:31:07 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.556 00:31:07 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:30.556 00:31:07 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:30.556 00:31:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.556 00:31:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.556 00:31:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.556 ************************************ 00:08:30.556 START TEST nvme_e2edp 00:08:30.556 ************************************ 00:08:30.556 00:31:07 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:30.818 NVMe Write/Read with End-to-End data protection test 00:08:30.818 Attached to 0000:00:13.0 00:08:30.818 Attached to 0000:00:10.0 00:08:30.818 Attached to 0000:00:11.0 00:08:30.818 Attached to 0000:00:12.0 00:08:30.818 Cleaning up... 
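[Editor's note] The nvme_dp binary behind this test exercises NVMe end-to-end data protection, i.e. IO carrying protection information (PI). A hedged sketch of what such a write looks like at the driver API level, assuming a namespace formatted with PI and separate metadata (the wrapper and buffers are illustrative; the flag and function names are real SPDK identifiers):

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void dp_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cpl;
        *(bool *)arg = true;
    }

    /* Write one block with the controller generating PI (PRACT) and the
     * guard and reference tags checked along the data path. */
    static int dp_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                        void *data, void *md, bool *done)
    {
        uint32_t flags = SPDK_NVME_IO_FLAGS_PRACT |
                         SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                         SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;

        return spdk_nvme_ns_cmd_write_with_md(ns, qpair, data, md,
                                              0 /* LBA */, 1 /* blocks */,
                                              dp_done, done, flags,
                                              0xFFFF /* apptag mask */, 0 /* apptag */);
    }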
00:08:30.818 00:08:30.818 real 0m0.225s 00:08:30.818 user 0m0.072s 00:08:30.818 sys 0m0.104s 00:08:30.818 00:31:07 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.818 00:31:07 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:30.818 ************************************ 00:08:30.818 END TEST nvme_e2edp 00:08:30.818 ************************************ 00:08:30.818 00:31:07 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:30.818 00:31:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.818 00:31:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.818 00:31:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.078 ************************************ 00:08:31.078 START TEST nvme_reserve 00:08:31.078 ************************************ 00:08:31.078 00:31:07 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:31.078 ===================================================== 00:08:31.078 NVMe Controller at PCI bus 0, device 19, function 0 00:08:31.078 ===================================================== 00:08:31.078 Reservations: Not Supported 00:08:31.078 ===================================================== 00:08:31.078 NVMe Controller at PCI bus 0, device 16, function 0 00:08:31.078 ===================================================== 00:08:31.078 Reservations: Not Supported 00:08:31.078 ===================================================== 00:08:31.078 NVMe Controller at PCI bus 0, device 17, function 0 00:08:31.078 ===================================================== 00:08:31.078 Reservations: Not Supported 00:08:31.078 ===================================================== 00:08:31.078 NVMe Controller at PCI bus 0, device 18, function 0 00:08:31.078 ===================================================== 00:08:31.078 Reservations: Not Supported 00:08:31.078 Reservation test passed 00:08:31.078 00:08:31.078 real 0m0.228s 00:08:31.078 user 0m0.076s 00:08:31.078 sys 0m0.100s 00:08:31.078 ************************************ 00:08:31.078 END TEST nvme_reserve 00:08:31.078 ************************************ 00:08:31.078 00:31:07 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.078 00:31:07 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:31.339 00:31:07 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:31.339 00:31:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.339 00:31:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.339 00:31:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.339 ************************************ 00:08:31.339 START TEST nvme_err_injection 00:08:31.339 ************************************ 00:08:31.339 00:31:07 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:31.599 NVMe Error Injection test 00:08:31.599 Attached to 0000:00:13.0 00:08:31.599 Attached to 0000:00:10.0 00:08:31.599 Attached to 0000:00:11.0 00:08:31.599 Attached to 0000:00:12.0 00:08:31.599 0000:00:13.0: get features failed as expected 00:08:31.599 0000:00:10.0: get features failed as expected 00:08:31.599 0000:00:11.0: get features failed as expected 00:08:31.599 0000:00:12.0: get features failed as expected 00:08:31.599 
0000:00:13.0: get features successfully as expected 00:08:31.599 0000:00:10.0: get features successfully as expected 00:08:31.599 0000:00:11.0: get features successfully as expected 00:08:31.599 0000:00:12.0: get features successfully as expected 00:08:31.599 0000:00:13.0: read failed as expected 00:08:31.599 0000:00:10.0: read failed as expected 00:08:31.599 0000:00:11.0: read failed as expected 00:08:31.599 0000:00:12.0: read failed as expected 00:08:31.599 0000:00:13.0: read successfully as expected 00:08:31.599 0000:00:10.0: read successfully as expected 00:08:31.599 0000:00:11.0: read successfully as expected 00:08:31.599 0000:00:12.0: read successfully as expected 00:08:31.599 Cleaning up... 00:08:31.599 00:08:31.599 real 0m0.234s 00:08:31.599 user 0m0.081s 00:08:31.599 sys 0m0.103s 00:08:31.599 00:31:08 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.599 ************************************ 00:08:31.599 END TEST nvme_err_injection 00:08:31.599 ************************************ 00:08:31.599 00:31:08 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:31.599 00:31:08 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:31.599 00:31:08 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:31.599 00:31:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.599 00:31:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.599 ************************************ 00:08:31.599 START TEST nvme_overhead 00:08:31.599 ************************************ 00:08:31.599 00:31:08 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:32.987 Initializing NVMe Controllers 00:08:32.987 Attached to 0000:00:13.0 00:08:32.987 Attached to 0000:00:10.0 00:08:32.987 Attached to 0000:00:11.0 00:08:32.987 Attached to 0000:00:12.0 00:08:32.987 Initialization complete. Launching workers. 
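[Editor's note] The overhead tool whose statistics follow measures host-side software cost per IO: the time spent inside the submission call and inside the completion poll, reported in nanoseconds and bucketed into the two histograms below. A rough sketch of the sampling idea using SPDK's TSC helpers, assuming an attached namespace and an allocated qpair (the wrapper and variable names are illustrative, not the tool's own code):

    #include <stdbool.h>
    #include <stdint.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void rd_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cpl;
        *(bool *)arg = true;
    }

    /* Returns the submit-side overhead of a single one-block read, in ns. */
    static uint64_t sample_submit_ns(struct spdk_nvme_ns *ns,
                                     struct spdk_nvme_qpair *qpair, void *buf)
    {
        bool done = false;
        uint64_t hz = spdk_get_ticks_hz();
        uint64_t t0 = spdk_get_ticks();

        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, rd_done, &done, 0);
        uint64_t submit_ticks = spdk_get_ticks() - t0;  /* cost of the submit call */

        while (!done) {
            /* the completion poll is timed the same way for the second histogram */
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        return submit_ticks * UINT64_C(1000000000) / hz;
    }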
00:08:32.987 submit (in ns) avg, min, max = 17693.1, 13890.8, 276248.5 00:08:32.987 complete (in ns) avg, min, max = 10876.2, 8117.7, 364522.3 00:08:32.987 00:08:32.987 Submit histogram 00:08:32.987 ================ 00:08:32.987 Range in us Cumulative Count 00:08:32.987 13.883 - 13.982: 0.0390% ( 1) 00:08:32.987 13.982 - 14.080: 0.0779% ( 1) 00:08:32.987 14.474 - 14.572: 0.1169% ( 1) 00:08:32.987 14.671 - 14.769: 0.1948% ( 2) 00:08:32.987 14.769 - 14.868: 0.3896% ( 5) 00:08:32.987 14.868 - 14.966: 0.7012% ( 8) 00:08:32.987 14.966 - 15.065: 1.1297% ( 11) 00:08:32.987 15.065 - 15.163: 1.4803% ( 9) 00:08:32.987 15.163 - 15.262: 2.4542% ( 25) 00:08:32.987 15.262 - 15.360: 3.7398% ( 33) 00:08:32.987 15.360 - 15.458: 4.9085% ( 30) 00:08:32.987 15.458 - 15.557: 6.9731% ( 53) 00:08:32.987 15.557 - 15.655: 10.3233% ( 86) 00:08:32.987 15.655 - 15.754: 15.3097% ( 128) 00:08:32.987 15.754 - 15.852: 20.7635% ( 140) 00:08:32.987 15.852 - 15.951: 27.1523% ( 164) 00:08:32.987 15.951 - 16.049: 34.7487% ( 195) 00:08:32.987 16.049 - 16.148: 42.1504% ( 190) 00:08:32.987 16.148 - 16.246: 49.3962% ( 186) 00:08:32.987 16.246 - 16.345: 55.4733% ( 156) 00:08:32.987 16.345 - 16.443: 60.9272% ( 140) 00:08:32.987 16.443 - 16.542: 65.3292% ( 113) 00:08:32.987 16.542 - 16.640: 69.4975% ( 107) 00:08:32.987 16.640 - 16.738: 71.8738% ( 61) 00:08:32.987 16.738 - 16.837: 73.4320% ( 40) 00:08:32.987 16.837 - 16.935: 74.6786% ( 32) 00:08:32.987 16.935 - 17.034: 75.6136% ( 24) 00:08:32.987 17.034 - 17.132: 76.1589% ( 14) 00:08:32.987 17.132 - 17.231: 76.7822% ( 16) 00:08:32.987 17.231 - 17.329: 77.6782% ( 23) 00:08:32.987 17.329 - 17.428: 77.8730% ( 5) 00:08:32.987 17.428 - 17.526: 78.1067% ( 6) 00:08:32.987 17.526 - 17.625: 78.4573% ( 9) 00:08:32.987 17.625 - 17.723: 78.8079% ( 9) 00:08:32.987 17.723 - 17.822: 79.2754% ( 12) 00:08:32.987 17.822 - 17.920: 79.7039% ( 11) 00:08:32.987 17.920 - 18.018: 80.0156% ( 8) 00:08:32.987 18.018 - 18.117: 80.4051% ( 10) 00:08:32.987 18.117 - 18.215: 81.0284% ( 16) 00:08:32.987 18.215 - 18.314: 81.4959% ( 12) 00:08:32.987 18.314 - 18.412: 82.0023% ( 13) 00:08:32.987 18.412 - 18.511: 82.7425% ( 19) 00:08:32.987 18.511 - 18.609: 83.2489% ( 13) 00:08:32.987 18.609 - 18.708: 83.7164% ( 12) 00:08:32.987 18.708 - 18.806: 84.0670% ( 9) 00:08:32.987 18.806 - 18.905: 84.7682% ( 18) 00:08:32.987 18.905 - 19.003: 85.3136% ( 14) 00:08:32.987 19.003 - 19.102: 85.8200% ( 13) 00:08:32.987 19.102 - 19.200: 86.5991% ( 20) 00:08:32.987 19.200 - 19.298: 86.6771% ( 2) 00:08:32.987 19.298 - 19.397: 87.1056% ( 11) 00:08:32.987 19.397 - 19.495: 87.4172% ( 8) 00:08:32.987 19.495 - 19.594: 87.6510% ( 6) 00:08:32.987 19.594 - 19.692: 88.0795% ( 11) 00:08:32.987 19.692 - 19.791: 88.4690% ( 10) 00:08:32.987 19.791 - 19.889: 88.7807% ( 8) 00:08:32.987 19.889 - 19.988: 89.1702% ( 10) 00:08:32.987 19.988 - 20.086: 89.2871% ( 3) 00:08:32.987 20.086 - 20.185: 89.5988% ( 8) 00:08:32.987 20.185 - 20.283: 89.9104% ( 8) 00:08:32.987 20.283 - 20.382: 90.0662% ( 4) 00:08:32.987 20.382 - 20.480: 90.2610% ( 5) 00:08:32.987 20.480 - 20.578: 90.4558% ( 5) 00:08:32.987 20.578 - 20.677: 90.6116% ( 4) 00:08:32.987 20.677 - 20.775: 90.8453% ( 6) 00:08:32.987 20.775 - 20.874: 91.0012% ( 4) 00:08:32.987 20.874 - 20.972: 91.2739% ( 7) 00:08:32.987 20.972 - 21.071: 91.5076% ( 6) 00:08:32.987 21.071 - 21.169: 91.6245% ( 3) 00:08:32.987 21.169 - 21.268: 91.9361% ( 8) 00:08:32.987 21.268 - 21.366: 92.1309% ( 5) 00:08:32.987 21.366 - 21.465: 92.4036% ( 7) 00:08:32.987 21.465 - 21.563: 92.5594% ( 4) 00:08:32.987 21.563 - 21.662: 92.7542% ( 5) 
00:08:32.987 21.662 - 21.760: 93.0658% ( 8) 00:08:32.987 21.760 - 21.858: 93.3775% ( 8) 00:08:32.987 21.858 - 21.957: 93.6112% ( 6) 00:08:32.987 21.957 - 22.055: 93.7281% ( 3) 00:08:32.987 22.055 - 22.154: 93.8450% ( 3) 00:08:32.987 22.154 - 22.252: 93.9618% ( 3) 00:08:32.987 22.252 - 22.351: 94.1566% ( 5) 00:08:32.987 22.351 - 22.449: 94.2735% ( 3) 00:08:32.987 22.449 - 22.548: 94.3903% ( 3) 00:08:32.987 22.548 - 22.646: 94.5072% ( 3) 00:08:32.987 22.646 - 22.745: 94.6241% ( 3) 00:08:32.987 22.745 - 22.843: 94.7020% ( 2) 00:08:32.987 22.843 - 22.942: 94.8189% ( 3) 00:08:32.987 23.040 - 23.138: 94.9747% ( 4) 00:08:32.987 23.138 - 23.237: 95.0915% ( 3) 00:08:32.987 23.237 - 23.335: 95.2084% ( 3) 00:08:32.987 23.335 - 23.434: 95.3642% ( 4) 00:08:32.987 23.434 - 23.532: 95.4422% ( 2) 00:08:32.987 23.532 - 23.631: 95.5590% ( 3) 00:08:32.987 23.631 - 23.729: 95.6759% ( 3) 00:08:32.987 23.729 - 23.828: 95.8317% ( 4) 00:08:32.987 23.828 - 23.926: 95.9486% ( 3) 00:08:32.987 23.926 - 24.025: 96.0654% ( 3) 00:08:32.987 24.025 - 24.123: 96.1823% ( 3) 00:08:32.987 24.123 - 24.222: 96.3381% ( 4) 00:08:32.987 24.222 - 24.320: 96.3771% ( 1) 00:08:32.987 24.320 - 24.418: 96.5719% ( 5) 00:08:32.987 24.517 - 24.615: 96.6498% ( 2) 00:08:32.987 24.714 - 24.812: 96.7277% ( 2) 00:08:32.987 24.812 - 24.911: 96.8835% ( 4) 00:08:32.987 24.911 - 25.009: 96.9225% ( 1) 00:08:32.987 25.009 - 25.108: 97.1562% ( 6) 00:08:32.987 25.108 - 25.206: 97.1952% ( 1) 00:08:32.987 25.403 - 25.600: 97.3120% ( 3) 00:08:32.987 25.600 - 25.797: 97.3899% ( 2) 00:08:32.987 25.797 - 25.994: 97.5847% ( 5) 00:08:32.987 25.994 - 26.191: 97.7016% ( 3) 00:08:32.987 26.191 - 26.388: 97.7795% ( 2) 00:08:32.987 26.388 - 26.585: 97.8964% ( 3) 00:08:32.987 26.782 - 26.978: 97.9743% ( 2) 00:08:32.987 27.175 - 27.372: 98.0132% ( 1) 00:08:32.987 27.372 - 27.569: 98.0522% ( 1) 00:08:32.988 28.160 - 28.357: 98.0912% ( 1) 00:08:32.988 28.554 - 28.751: 98.1301% ( 1) 00:08:32.988 28.948 - 29.145: 98.1691% ( 1) 00:08:32.988 29.735 - 29.932: 98.2080% ( 1) 00:08:32.988 29.932 - 30.129: 98.2470% ( 1) 00:08:32.988 30.523 - 30.720: 98.2859% ( 1) 00:08:32.988 33.477 - 33.674: 98.3249% ( 1) 00:08:32.988 33.674 - 33.871: 98.4028% ( 2) 00:08:32.988 33.871 - 34.068: 98.5586% ( 4) 00:08:32.988 34.068 - 34.265: 98.6755% ( 3) 00:08:32.988 34.265 - 34.462: 98.9482% ( 7) 00:08:32.988 34.462 - 34.658: 99.0261% ( 2) 00:08:32.988 34.658 - 34.855: 99.0651% ( 1) 00:08:32.988 35.052 - 35.249: 99.1040% ( 1) 00:08:32.988 35.249 - 35.446: 99.1819% ( 2) 00:08:32.988 36.037 - 36.234: 99.2209% ( 1) 00:08:32.988 36.234 - 36.431: 99.2598% ( 1) 00:08:32.988 37.218 - 37.415: 99.2988% ( 1) 00:08:32.988 37.809 - 38.006: 99.3377% ( 1) 00:08:32.988 41.551 - 41.748: 99.3767% ( 1) 00:08:32.988 42.732 - 42.929: 99.4157% ( 1) 00:08:32.988 43.520 - 43.717: 99.4546% ( 1) 00:08:32.988 49.034 - 49.231: 99.4936% ( 1) 00:08:32.988 51.200 - 51.594: 99.5325% ( 1) 00:08:32.988 52.775 - 53.169: 99.5715% ( 1) 00:08:32.988 56.714 - 57.108: 99.6104% ( 1) 00:08:32.988 63.015 - 63.409: 99.6494% ( 1) 00:08:32.988 64.197 - 64.591: 99.6884% ( 1) 00:08:32.988 68.529 - 68.923: 99.7273% ( 1) 00:08:32.988 72.862 - 73.255: 99.7663% ( 1) 00:08:32.988 96.492 - 96.886: 99.8052% ( 1) 00:08:32.988 111.852 - 112.640: 99.8442% ( 1) 00:08:32.988 123.668 - 124.455: 99.8831% ( 1) 00:08:32.988 127.606 - 128.394: 99.9221% ( 1) 00:08:32.988 192.985 - 193.772: 99.9610% ( 1) 00:08:32.988 275.692 - 277.268: 100.0000% ( 1) 00:08:32.988 00:08:32.988 Complete histogram 00:08:32.988 ================== 00:08:32.988 Range in us 
Cumulative Count 00:08:32.988 8.074 - 8.123: 0.0390% ( 1) 00:08:32.988 8.123 - 8.172: 0.1169% ( 2) 00:08:32.988 8.172 - 8.222: 0.8960% ( 20) 00:08:32.988 8.222 - 8.271: 1.5972% ( 18) 00:08:32.988 8.271 - 8.320: 2.4932% ( 23) 00:08:32.988 8.320 - 8.369: 3.2333% ( 19) 00:08:32.988 8.369 - 8.418: 4.1683% ( 24) 00:08:32.988 8.418 - 8.468: 5.0643% ( 23) 00:08:32.988 8.468 - 8.517: 5.9603% ( 23) 00:08:32.988 8.517 - 8.566: 6.5836% ( 16) 00:08:32.988 8.566 - 8.615: 7.4016% ( 21) 00:08:32.988 8.615 - 8.665: 7.7912% ( 10) 00:08:32.988 8.665 - 8.714: 8.2197% ( 11) 00:08:32.988 8.714 - 8.763: 8.5703% ( 9) 00:08:32.988 8.763 - 8.812: 8.8041% ( 6) 00:08:32.988 8.812 - 8.862: 9.0767% ( 7) 00:08:32.988 8.862 - 8.911: 9.1547% ( 2) 00:08:32.988 8.911 - 8.960: 9.1936% ( 1) 00:08:32.988 8.960 - 9.009: 9.2326% ( 1) 00:08:32.988 9.009 - 9.058: 9.3105% ( 2) 00:08:32.988 9.058 - 9.108: 9.3884% ( 2) 00:08:32.988 9.157 - 9.206: 9.4273% ( 1) 00:08:32.988 9.255 - 9.305: 9.5053% ( 2) 00:08:32.988 9.354 - 9.403: 9.5442% ( 1) 00:08:32.988 9.649 - 9.698: 9.5832% ( 1) 00:08:32.988 9.698 - 9.748: 9.7000% ( 3) 00:08:32.988 9.748 - 9.797: 9.7780% ( 2) 00:08:32.988 9.797 - 9.846: 10.3623% ( 15) 00:08:32.988 9.846 - 9.895: 11.9205% ( 40) 00:08:32.988 9.895 - 9.945: 14.2968% ( 61) 00:08:32.988 9.945 - 9.994: 17.9587% ( 94) 00:08:32.988 9.994 - 10.043: 23.1399% ( 133) 00:08:32.988 10.043 - 10.092: 30.0740% ( 178) 00:08:32.988 10.092 - 10.142: 36.8913% ( 175) 00:08:32.988 10.142 - 10.191: 44.2540% ( 189) 00:08:32.988 10.191 - 10.240: 51.4608% ( 185) 00:08:32.988 10.240 - 10.289: 57.9665% ( 167) 00:08:32.988 10.289 - 10.338: 64.4721% ( 167) 00:08:32.988 10.338 - 10.388: 70.4324% ( 153) 00:08:32.988 10.388 - 10.437: 75.0292% ( 118) 00:08:32.988 10.437 - 10.486: 78.9248% ( 100) 00:08:32.988 10.486 - 10.535: 81.9244% ( 77) 00:08:32.988 10.535 - 10.585: 84.7293% ( 72) 00:08:32.988 10.585 - 10.634: 87.0666% ( 60) 00:08:32.988 10.634 - 10.683: 88.7028% ( 42) 00:08:32.988 10.683 - 10.732: 89.9883% ( 33) 00:08:32.988 10.732 - 10.782: 90.7285% ( 19) 00:08:32.988 10.782 - 10.831: 91.2349% ( 13) 00:08:32.988 10.831 - 10.880: 91.6245% ( 10) 00:08:32.988 10.880 - 10.929: 92.1698% ( 14) 00:08:32.988 10.929 - 10.978: 92.6373% ( 12) 00:08:32.988 10.978 - 11.028: 92.9100% ( 7) 00:08:32.988 11.028 - 11.077: 93.1437% ( 6) 00:08:32.988 11.077 - 11.126: 93.3385% ( 5) 00:08:32.988 11.126 - 11.175: 93.4944% ( 4) 00:08:32.988 11.175 - 11.225: 93.6112% ( 3) 00:08:32.988 11.225 - 11.274: 93.6891% ( 2) 00:08:32.988 11.323 - 11.372: 93.9229% ( 6) 00:08:32.988 11.372 - 11.422: 94.0008% ( 2) 00:08:32.988 11.422 - 11.471: 94.0787% ( 2) 00:08:32.988 11.471 - 11.520: 94.1956% ( 3) 00:08:32.988 11.520 - 11.569: 94.3124% ( 3) 00:08:32.988 11.569 - 11.618: 94.4293% ( 3) 00:08:32.988 11.618 - 11.668: 94.5851% ( 4) 00:08:32.988 11.668 - 11.717: 94.7020% ( 3) 00:08:32.988 11.717 - 11.766: 94.7799% ( 2) 00:08:32.988 11.766 - 11.815: 94.8968% ( 3) 00:08:32.988 11.815 - 11.865: 95.1305% ( 6) 00:08:32.988 11.865 - 11.914: 95.2474% ( 3) 00:08:32.988 11.914 - 11.963: 95.4422% ( 5) 00:08:32.988 11.963 - 12.012: 95.5980% ( 4) 00:08:32.988 12.062 - 12.111: 95.7148% ( 3) 00:08:32.988 12.111 - 12.160: 95.8707% ( 4) 00:08:32.988 12.160 - 12.209: 95.9486% ( 2) 00:08:32.988 12.209 - 12.258: 95.9875% ( 1) 00:08:32.988 12.258 - 12.308: 96.0265% ( 1) 00:08:32.988 12.357 - 12.406: 96.1434% ( 3) 00:08:32.988 12.505 - 12.554: 96.1823% ( 1) 00:08:32.988 12.603 - 12.702: 96.2992% ( 3) 00:08:32.988 12.997 - 13.095: 96.3381% ( 1) 00:08:32.988 13.095 - 13.194: 96.3771% ( 1) 00:08:32.988 
13.292 - 13.391: 96.4160% ( 1) 00:08:32.988 13.785 - 13.883: 96.4550% ( 1) 00:08:32.988 13.982 - 14.080: 96.4940% ( 1) 00:08:32.988 14.277 - 14.375: 96.5329% ( 1) 00:08:32.988 14.375 - 14.474: 96.5719% ( 1) 00:08:32.988 14.868 - 14.966: 96.6108% ( 1) 00:08:32.988 15.163 - 15.262: 96.6887% ( 2) 00:08:32.988 15.458 - 15.557: 96.7277% ( 1) 00:08:32.988 15.557 - 15.655: 96.7667% ( 1) 00:08:32.988 15.852 - 15.951: 96.8056% ( 1) 00:08:32.988 16.345 - 16.443: 96.8446% ( 1) 00:08:32.988 16.443 - 16.542: 96.8835% ( 1) 00:08:32.988 16.542 - 16.640: 96.9614% ( 2) 00:08:32.988 17.329 - 17.428: 97.0783% ( 3) 00:08:32.988 17.428 - 17.526: 97.1173% ( 1) 00:08:32.988 17.526 - 17.625: 97.1562% ( 1) 00:08:32.988 17.822 - 17.920: 97.2341% ( 2) 00:08:32.988 18.018 - 18.117: 97.3120% ( 2) 00:08:32.988 18.215 - 18.314: 97.3899% ( 2) 00:08:32.988 18.314 - 18.412: 97.4289% ( 1) 00:08:32.988 18.412 - 18.511: 97.5847% ( 4) 00:08:32.988 18.511 - 18.609: 97.6237% ( 1) 00:08:32.988 18.609 - 18.708: 97.8185% ( 5) 00:08:32.988 18.708 - 18.806: 97.8574% ( 1) 00:08:32.988 18.905 - 19.003: 97.8964% ( 1) 00:08:32.988 19.200 - 19.298: 98.0132% ( 3) 00:08:32.988 19.397 - 19.495: 98.0522% ( 1) 00:08:32.988 19.692 - 19.791: 98.0912% ( 1) 00:08:32.988 19.791 - 19.889: 98.1301% ( 1) 00:08:32.988 19.988 - 20.086: 98.2470% ( 3) 00:08:32.988 20.677 - 20.775: 98.2859% ( 1) 00:08:32.988 21.858 - 21.957: 98.3249% ( 1) 00:08:32.988 22.351 - 22.449: 98.3638% ( 1) 00:08:32.988 23.040 - 23.138: 98.4028% ( 1) 00:08:32.988 23.335 - 23.434: 98.4418% ( 1) 00:08:32.988 23.828 - 23.926: 98.4807% ( 1) 00:08:32.988 24.320 - 24.418: 98.5197% ( 1) 00:08:32.988 24.517 - 24.615: 98.5976% ( 2) 00:08:32.988 24.615 - 24.714: 98.7534% ( 4) 00:08:32.988 24.714 - 24.812: 98.7924% ( 1) 00:08:32.988 24.812 - 24.911: 98.8703% ( 2) 00:08:32.988 24.911 - 25.009: 98.9092% ( 1) 00:08:32.988 25.009 - 25.108: 99.0651% ( 4) 00:08:32.988 25.108 - 25.206: 99.1040% ( 1) 00:08:32.988 25.206 - 25.403: 99.1819% ( 2) 00:08:32.988 25.403 - 25.600: 99.2209% ( 1) 00:08:32.988 25.600 - 25.797: 99.2598% ( 1) 00:08:32.988 26.191 - 26.388: 99.2988% ( 1) 00:08:32.988 26.782 - 26.978: 99.3377% ( 1) 00:08:32.988 26.978 - 27.175: 99.4546% ( 3) 00:08:32.988 27.175 - 27.372: 99.4936% ( 1) 00:08:32.988 27.569 - 27.766: 99.5325% ( 1) 00:08:32.988 27.963 - 28.160: 99.5715% ( 1) 00:08:32.988 28.948 - 29.145: 99.6104% ( 1) 00:08:32.988 31.311 - 31.508: 99.6494% ( 1) 00:08:32.988 32.689 - 32.886: 99.6884% ( 1) 00:08:32.988 34.265 - 34.462: 99.7273% ( 1) 00:08:32.988 34.462 - 34.658: 99.7663% ( 1) 00:08:32.988 41.748 - 41.945: 99.8052% ( 1) 00:08:32.988 94.129 - 94.523: 99.8442% ( 1) 00:08:32.988 107.126 - 107.914: 99.8831% ( 1) 00:08:32.988 179.594 - 180.382: 99.9221% ( 1) 00:08:32.988 186.683 - 187.471: 99.9610% ( 1) 00:08:32.988 363.914 - 365.489: 100.0000% ( 1) 00:08:32.988 00:08:32.988 00:08:32.988 real 0m1.225s 00:08:32.988 user 0m1.055s 00:08:32.988 sys 0m0.120s 00:08:32.989 00:31:09 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.989 ************************************ 00:08:32.989 END TEST nvme_overhead 00:08:32.989 ************************************ 00:08:32.989 00:31:09 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:32.989 00:31:09 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:32.989 00:31:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:32.989 00:31:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.989 
00:31:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.989 ************************************ 00:08:32.989 START TEST nvme_arbitration 00:08:32.989 ************************************ 00:08:32.989 00:31:09 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:36.314 Initializing NVMe Controllers 00:08:36.314 Attached to 0000:00:13.0 00:08:36.314 Attached to 0000:00:10.0 00:08:36.314 Attached to 0000:00:11.0 00:08:36.314 Attached to 0000:00:12.0 00:08:36.314 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:36.314 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:36.314 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:36.314 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:36.314 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:36.314 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:36.314 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:36.314 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:36.314 Initialization complete. Launching workers. 00:08:36.314 Starting thread on core 1 with urgent priority queue 00:08:36.314 Starting thread on core 2 with urgent priority queue 00:08:36.314 Starting thread on core 3 with urgent priority queue 00:08:36.314 Starting thread on core 0 with urgent priority queue 00:08:36.314 QEMU NVMe Ctrl (12343 ) core 0: 3584.00 IO/s 27.90 secs/100000 ios 00:08:36.314 QEMU NVMe Ctrl (12342 ) core 0: 3584.00 IO/s 27.90 secs/100000 ios 00:08:36.314 QEMU NVMe Ctrl (12340 ) core 1: 3520.00 IO/s 28.41 secs/100000 ios 00:08:36.314 QEMU NVMe Ctrl (12342 ) core 1: 3520.00 IO/s 28.41 secs/100000 ios 00:08:36.314 QEMU NVMe Ctrl (12341 ) core 2: 3264.00 IO/s 30.64 secs/100000 ios 00:08:36.314 QEMU NVMe Ctrl (12342 ) core 3: 3008.00 IO/s 33.24 secs/100000 ios 00:08:36.314 ======================================================== 00:08:36.314 00:08:36.314 00:08:36.314 real 0m3.277s 00:08:36.314 user 0m9.038s 00:08:36.314 sys 0m0.142s 00:08:36.314 00:31:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.314 00:31:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:36.314 ************************************ 00:08:36.314 END TEST nvme_arbitration 00:08:36.314 ************************************ 00:08:36.314 00:31:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:36.314 00:31:12 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.314 00:31:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.314 00:31:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.314 ************************************ 00:08:36.314 START TEST nvme_single_aen 00:08:36.314 ************************************ 00:08:36.314 00:31:12 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:36.314 Asynchronous Event Request test 00:08:36.314 Attached to 0000:00:13.0 00:08:36.314 Attached to 0000:00:10.0 00:08:36.314 Attached to 0000:00:11.0 00:08:36.314 Attached to 0000:00:12.0 00:08:36.314 Reset controller to setup AER completions for this process 00:08:36.314 Registering asynchronous event callbacks... 
00:08:36.314 Getting orig temperature thresholds of all controllers 00:08:36.314 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.314 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.314 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.314 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:36.314 Setting all controllers temperature threshold low to trigger AER 00:08:36.314 Waiting for all controllers temperature threshold to be set lower 00:08:36.314 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.314 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:36.314 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.314 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:36.314 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.314 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:36.314 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:36.314 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:36.314 Waiting for all controllers to trigger AER and reset threshold 00:08:36.314 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.314 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.314 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.314 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:36.314 Cleaning up... 00:08:36.314 00:08:36.314 real 0m0.251s 00:08:36.314 user 0m0.086s 00:08:36.314 sys 0m0.114s 00:08:36.314 00:31:13 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.314 00:31:13 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:36.314 ************************************ 00:08:36.314 END TEST nvme_single_aen 00:08:36.314 ************************************ 00:08:36.574 00:31:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:36.574 00:31:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.574 00:31:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.574 00:31:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.574 ************************************ 00:08:36.574 START TEST nvme_doorbell_aers 00:08:36.574 ************************************ 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:36.574 00:31:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:36.834 [2024-11-27 00:31:13.442327] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:08:46.850 Executing: test_write_invalid_db 00:08:46.850 Waiting for AER completion... 00:08:46.850 Failure: test_write_invalid_db 00:08:46.850 00:08:46.850 Executing: test_invalid_db_write_overflow_sq 00:08:46.850 Waiting for AER completion... 00:08:46.850 Failure: test_invalid_db_write_overflow_sq 00:08:46.850 00:08:46.850 Executing: test_invalid_db_write_overflow_cq 00:08:46.850 Waiting for AER completion... 00:08:46.850 Failure: test_invalid_db_write_overflow_cq 00:08:46.850 00:08:46.850 00:31:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:46.850 00:31:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:46.850 [2024-11-27 00:31:23.433495] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:08:56.836 Executing: test_write_invalid_db 00:08:56.836 Waiting for AER completion... 00:08:56.836 Failure: test_write_invalid_db 00:08:56.836 00:08:56.836 Executing: test_invalid_db_write_overflow_sq 00:08:56.836 Waiting for AER completion... 00:08:56.836 Failure: test_invalid_db_write_overflow_sq 00:08:56.836 00:08:56.836 Executing: test_invalid_db_write_overflow_cq 00:08:56.836 Waiting for AER completion... 00:08:56.836 Failure: test_invalid_db_write_overflow_cq 00:08:56.836 00:08:56.836 00:31:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:56.836 00:31:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:56.836 [2024-11-27 00:31:33.472188] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:06.872 Executing: test_write_invalid_db 00:09:06.872 Waiting for AER completion... 00:09:06.872 Failure: test_write_invalid_db 00:09:06.872 00:09:06.872 Executing: test_invalid_db_write_overflow_sq 00:09:06.872 Waiting for AER completion... 00:09:06.872 Failure: test_invalid_db_write_overflow_sq 00:09:06.872 00:09:06.872 Executing: test_invalid_db_write_overflow_cq 00:09:06.872 Waiting for AER completion... 
00:09:06.872 Failure: test_invalid_db_write_overflow_cq 00:09:06.872 00:09:06.872 00:31:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:06.872 00:31:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:06.872 [2024-11-27 00:31:43.503031] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.842 Executing: test_write_invalid_db 00:09:16.842 Waiting for AER completion... 00:09:16.842 Failure: test_write_invalid_db 00:09:16.842 00:09:16.842 Executing: test_invalid_db_write_overflow_sq 00:09:16.842 Waiting for AER completion... 00:09:16.842 Failure: test_invalid_db_write_overflow_sq 00:09:16.842 00:09:16.842 Executing: test_invalid_db_write_overflow_cq 00:09:16.842 Waiting for AER completion... 00:09:16.842 Failure: test_invalid_db_write_overflow_cq 00:09:16.842 00:09:16.842 00:09:16.842 real 0m40.191s 00:09:16.842 user 0m34.269s 00:09:16.842 sys 0m5.568s 00:09:16.842 00:31:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.842 00:31:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:16.842 ************************************ 00:09:16.842 END TEST nvme_doorbell_aers 00:09:16.842 ************************************ 00:09:16.842 00:31:53 nvme -- nvme/nvme.sh@97 -- # uname 00:09:16.843 00:31:53 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:16.843 00:31:53 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:16.843 00:31:53 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:16.843 00:31:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.843 00:31:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.843 ************************************ 00:09:16.843 START TEST nvme_multi_aen 00:09:16.843 ************************************ 00:09:16.843 00:31:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:16.843 [2024-11-27 00:31:53.539099] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.539153] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.539165] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.540501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.540528] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.540536] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.541666] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. 
Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.541688] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.541696] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.542842] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.542875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 [2024-11-27 00:31:53.542883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:16.843 Child process pid: 75717 00:09:17.102 [Child] Asynchronous Event Request test 00:09:17.102 [Child] Attached to 0000:00:13.0 00:09:17.102 [Child] Attached to 0000:00:10.0 00:09:17.102 [Child] Attached to 0000:00:11.0 00:09:17.102 [Child] Attached to 0000:00:12.0 00:09:17.102 [Child] Registering asynchronous event callbacks... 00:09:17.102 [Child] Getting orig temperature thresholds of all controllers 00:09:17.102 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:17.102 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 [Child] Cleaning up... 00:09:17.102 Asynchronous Event Request test 00:09:17.102 Attached to 0000:00:13.0 00:09:17.102 Attached to 0000:00:10.0 00:09:17.102 Attached to 0000:00:11.0 00:09:17.102 Attached to 0000:00:12.0 00:09:17.102 Reset controller to setup AER completions for this process 00:09:17.102 Registering asynchronous event callbacks... 
00:09:17.102 Getting orig temperature thresholds of all controllers 00:09:17.102 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:17.102 Setting all controllers temperature threshold low to trigger AER 00:09:17.102 Waiting for all controllers temperature threshold to be set lower 00:09:17.102 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:17.102 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:17.102 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:17.102 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:17.102 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:17.102 Waiting for all controllers to trigger AER and reset threshold 00:09:17.102 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:17.102 Cleaning up... 00:09:17.102 00:09:17.102 real 0m0.407s 00:09:17.102 user 0m0.135s 00:09:17.102 sys 0m0.164s 00:09:17.102 00:31:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.102 00:31:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:17.102 ************************************ 00:09:17.102 END TEST nvme_multi_aen 00:09:17.102 ************************************ 00:09:17.102 00:31:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:17.102 00:31:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:17.102 00:31:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.102 00:31:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.102 ************************************ 00:09:17.102 START TEST nvme_startup 00:09:17.102 ************************************ 00:09:17.102 00:31:53 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:17.361 Initializing NVMe Controllers 00:09:17.361 Attached to 0000:00:13.0 00:09:17.361 Attached to 0000:00:10.0 00:09:17.361 Attached to 0000:00:11.0 00:09:17.361 Attached to 0000:00:12.0 00:09:17.361 Initialization complete. 00:09:17.361 Time used:132874.719 (us). 
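[Editor's note] The single- and multi-process AER tests above follow the same recipe visible in their output: register an asynchronous event callback, lower the temperature threshold below the controller's current temperature (323 Kelvin here, against an original threshold of 343 Kelvin) so a SMART/health event fires, then restore the threshold from the callback. A sketch of that trigger, assuming probe/attach already happened (the 200 Kelvin value and wrapper names are illustrative; the feature code and functions are real SPDK identifiers):

    #include "spdk/nvme.h"

    static void on_aer(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg; (void)cpl;
        /* AEN arrived -- the tests read log page 2 here and then
         * restore the original temperature threshold. */
    }

    static void set_feat_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg; (void)cpl;
    }

    static void trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);
        /* cdw11[15:0] carries the threshold in Kelvin; anything below the
         * reported 323 K should make the controller raise the event. */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        200, 0, NULL, 0, set_feat_done, NULL);
    }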
00:09:17.361 00:09:17.361 real 0m0.192s 00:09:17.361 user 0m0.053s 00:09:17.361 sys 0m0.096s 00:09:17.361 00:31:54 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.361 00:31:54 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:17.361 ************************************ 00:09:17.361 END TEST nvme_startup 00:09:17.361 ************************************ 00:09:17.361 00:31:54 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:17.361 00:31:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.361 00:31:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.361 00:31:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.361 ************************************ 00:09:17.361 START TEST nvme_multi_secondary 00:09:17.361 ************************************ 00:09:17.361 00:31:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:17.361 00:31:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75773 00:09:17.361 00:31:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75774 00:09:17.361 00:31:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:17.361 00:31:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:17.361 00:31:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:20.645 Initializing NVMe Controllers 00:09:20.645 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.645 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.645 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.645 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.645 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:20.645 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:20.645 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:20.645 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:20.645 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:20.645 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:20.645 Initialization complete. Launching workers. 
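[Editor's note] nvme_multi_secondary runs one primary and two secondary spdk_nvme_perf processes against the same four controllers at once: the "-i 0" argument in the traced commands is the shared-memory group ID that places all of them in one multi-process group, while the disjoint core masks (0x1, 0x2, 0x4) keep their polling loops on separate cores. A sketch of the env-layer side of "-i", under the assumption that matching shm_id values are what lets a later process attach as a secondary (the process name and core mask below are illustrative; the option fields are real spdk_env_opts members):

    #include "spdk/env.h"

    int join_shm_group(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "perf_secondary";  /* illustrative process name */
        opts.shm_id = 0;               /* same group ID as "-i 0" above */
        opts.core_mask = "0x2";        /* disjoint from the primary's 0x1 */
        return spdk_env_init(&opts);   /* joins the existing multi-process group */
    }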
00:09:20.645 ======================================================== 00:09:20.645 Latency(us) 00:09:20.645 Device Information : IOPS MiB/s Average min max 00:09:20.645 PCIE (0000:00:13.0) NSID 1 from core 1: 6651.37 25.98 2405.09 967.59 9940.77 00:09:20.645 PCIE (0000:00:10.0) NSID 1 from core 1: 6651.37 25.98 2404.71 1028.33 10805.95 00:09:20.645 PCIE (0000:00:11.0) NSID 1 from core 1: 6651.37 25.98 2405.63 936.22 10711.63 00:09:20.645 PCIE (0000:00:12.0) NSID 1 from core 1: 6651.37 25.98 2405.79 928.62 10263.02 00:09:20.645 PCIE (0000:00:12.0) NSID 2 from core 1: 6651.37 25.98 2405.75 963.88 10573.48 00:09:20.645 PCIE (0000:00:12.0) NSID 3 from core 1: 6651.37 25.98 2406.53 964.05 9574.70 00:09:20.645 ======================================================== 00:09:20.645 Total : 39908.20 155.89 2405.58 928.62 10805.95 00:09:20.645 00:09:20.907 Initializing NVMe Controllers 00:09:20.907 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.907 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.907 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.907 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.907 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:20.907 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:20.907 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:20.907 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:20.907 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:20.907 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:20.907 Initialization complete. Launching workers. 00:09:20.907 ======================================================== 00:09:20.907 Latency(us) 00:09:20.907 Device Information : IOPS MiB/s Average min max 00:09:20.907 PCIE (0000:00:13.0) NSID 1 from core 2: 2405.44 9.40 6651.08 1668.33 21924.40 00:09:20.907 PCIE (0000:00:10.0) NSID 1 from core 2: 2405.44 9.40 6659.76 1876.43 22174.05 00:09:20.907 PCIE (0000:00:11.0) NSID 1 from core 2: 2405.44 9.40 6661.16 1826.30 23349.68 00:09:20.907 PCIE (0000:00:12.0) NSID 1 from core 2: 2405.44 9.40 6662.07 1743.94 27937.61 00:09:20.907 PCIE (0000:00:12.0) NSID 2 from core 2: 2405.44 9.40 6661.92 1700.84 23731.14 00:09:20.907 PCIE (0000:00:12.0) NSID 3 from core 2: 2405.44 9.40 6661.72 1568.27 23249.36 00:09:20.907 ======================================================== 00:09:20.907 Total : 14432.62 56.38 6659.62 1568.27 27937.61 00:09:20.907 00:09:20.907 00:31:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75773 00:09:22.823 Initializing NVMe Controllers 00:09:22.823 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:22.823 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:22.823 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:22.823 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:22.823 Initialization complete. Launching workers. 
00:09:22.823 ======================================================== 00:09:22.823 Latency(us) 00:09:22.823 Device Information : IOPS MiB/s Average min max 00:09:22.823 PCIE (0000:00:13.0) NSID 1 from core 0: 8821.54 34.46 1813.36 781.12 12401.26 00:09:22.823 PCIE (0000:00:10.0) NSID 1 from core 0: 8821.54 34.46 1812.49 757.55 10920.39 00:09:22.823 PCIE (0000:00:11.0) NSID 1 from core 0: 8821.54 34.46 1813.33 761.71 12171.07 00:09:22.823 PCIE (0000:00:12.0) NSID 1 from core 0: 8821.54 34.46 1813.30 614.49 11424.45 00:09:22.823 PCIE (0000:00:12.0) NSID 2 from core 0: 8821.54 34.46 1813.27 541.30 11604.71 00:09:22.823 PCIE (0000:00:12.0) NSID 3 from core 0: 8821.54 34.46 1813.24 452.51 11373.57 00:09:22.823 ======================================================== 00:09:22.823 Total : 52929.27 206.75 1813.17 452.51 12401.26 00:09:22.823 00:09:22.823 00:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75774 00:09:22.823 00:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75843 00:09:22.823 00:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:22.823 00:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75844 00:09:22.823 00:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:22.823 00:31:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:26.127 Initializing NVMe Controllers 00:09:26.127 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:26.127 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.127 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.127 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:26.127 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:26.127 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:26.127 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:26.127 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:26.127 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:26.127 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:26.127 Initialization complete. Launching workers. 
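The latency figures are likewise internally consistent by Little's law: at -q 16 per namespace, per-namespace IOPS times mean latency should land near the queue depth, e.g. for the 8821.54-IOPS core-0 rows above:

  awk 'BEGIN { printf "effective queue depth ~ %.1f\n", 8821.54 * 1813.17 / 1e6 }'   # prints ~16.0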
00:09:26.127 ======================================================== 00:09:26.127 Latency(us) 00:09:26.127 Device Information : IOPS MiB/s Average min max 00:09:26.127 PCIE (0000:00:13.0) NSID 1 from core 0: 4596.06 17.95 3480.75 821.67 9118.50 00:09:26.127 PCIE (0000:00:10.0) NSID 1 from core 0: 4596.06 17.95 3479.57 794.13 9993.98 00:09:26.127 PCIE (0000:00:11.0) NSID 1 from core 0: 4596.06 17.95 3480.61 816.87 11351.89 00:09:26.127 PCIE (0000:00:12.0) NSID 1 from core 0: 4596.06 17.95 3481.22 813.59 9474.65 00:09:26.127 PCIE (0000:00:12.0) NSID 2 from core 0: 4596.06 17.95 3481.62 811.40 10256.80 00:09:26.127 PCIE (0000:00:12.0) NSID 3 from core 0: 4601.39 17.97 3477.58 822.94 9749.64 00:09:26.127 ======================================================== 00:09:26.127 Total : 27581.71 107.74 3480.23 794.13 11351.89 00:09:26.127 00:09:26.127 Initializing NVMe Controllers 00:09:26.127 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:26.127 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:26.127 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:26.127 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:26.127 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:26.127 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:26.127 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:26.127 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:26.127 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:26.127 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:26.127 Initialization complete. Launching workers. 00:09:26.127 ======================================================== 00:09:26.127 Latency(us) 00:09:26.127 Device Information : IOPS MiB/s Average min max 00:09:26.127 PCIE (0000:00:13.0) NSID 1 from core 1: 4862.97 19.00 3289.64 1001.35 10101.09 00:09:26.127 PCIE (0000:00:10.0) NSID 1 from core 1: 4862.97 19.00 3288.68 941.22 10266.28 00:09:26.127 PCIE (0000:00:11.0) NSID 1 from core 1: 4862.97 19.00 3289.72 919.96 10244.40 00:09:26.127 PCIE (0000:00:12.0) NSID 1 from core 1: 4862.97 19.00 3289.64 931.26 9945.56 00:09:26.127 PCIE (0000:00:12.0) NSID 2 from core 1: 4862.97 19.00 3289.55 1038.48 10155.31 00:09:26.127 PCIE (0000:00:12.0) NSID 3 from core 1: 4862.97 19.00 3289.46 741.54 10234.88 00:09:26.127 ======================================================== 00:09:26.127 Total : 29177.83 113.98 3289.45 741.54 10266.28 00:09:26.127 00:09:28.042 Initializing NVMe Controllers 00:09:28.042 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:28.042 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.042 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:28.042 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:28.042 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:28.042 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:28.042 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:28.042 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:28.042 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:28.042 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:28.042 Initialization complete. Launching workers. 
00:09:28.042 ======================================================== 00:09:28.042 Latency(us) 00:09:28.042 Device Information : IOPS MiB/s Average min max 00:09:28.042 PCIE (0000:00:13.0) NSID 1 from core 2: 2142.92 8.37 7465.92 1719.13 24069.64 00:09:28.042 PCIE (0000:00:10.0) NSID 1 from core 2: 2142.92 8.37 7465.12 1671.43 24041.21 00:09:28.042 PCIE (0000:00:11.0) NSID 1 from core 2: 2142.92 8.37 7465.93 1613.27 24265.70 00:09:28.042 PCIE (0000:00:12.0) NSID 1 from core 2: 2142.92 8.37 7465.71 1523.61 25676.26 00:09:28.042 PCIE (0000:00:12.0) NSID 2 from core 2: 2142.92 8.37 7465.11 1180.36 25874.14 00:09:28.042 PCIE (0000:00:12.0) NSID 3 from core 2: 2142.92 8.37 7464.52 888.04 26630.96 00:09:28.042 ======================================================== 00:09:28.042 Total : 12857.55 50.22 7465.38 888.04 26630.96 00:09:28.042 00:09:28.042 00:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75843 00:09:28.042 00:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75844 00:09:28.042 00:09:28.042 real 0m10.664s 00:09:28.042 user 0m18.242s 00:09:28.042 sys 0m0.697s 00:09:28.042 ************************************ 00:09:28.042 END TEST nvme_multi_secondary 00:09:28.042 ************************************ 00:09:28.042 00:32:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.042 00:32:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:28.042 00:32:04 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:28.042 00:32:04 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:28.042 00:32:04 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74788 ]] 00:09:28.042 00:32:04 nvme -- common/autotest_common.sh@1094 -- # kill 74788 00:09:28.042 00:32:04 nvme -- common/autotest_common.sh@1095 -- # wait 74788 00:09:28.042 [2024-11-27 00:32:04.762489] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.762569] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.762588] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.762606] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.763429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.763475] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.763491] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.763510] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.764193] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 
00:09:28.042 [2024-11-27 00:32:04.764238] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.764255] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.764280] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.765003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.765047] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.765065] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.042 [2024-11-27 00:32:04.765085] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75716) is not found. Dropping the request. 00:09:28.303 00:32:04 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:28.303 00:32:04 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:28.303 00:32:04 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:28.303 00:32:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.303 00:32:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.303 00:32:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.303 ************************************ 00:09:28.303 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:28.303 ************************************ 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:28.303 * Looking for test storage... 
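The kill_stub sequence above (common/autotest_common.sh@1093-1101) reduces to a guard-kill-wait on the stub PID; a condensed sketch, with the PID hard-coded only to mirror this run:

  stub_pid=74788
  if [[ -e /proc/$stub_pid ]]; then
      kill "$stub_pid"
      wait "$stub_pid" || true   # reap it; the dropped-request errors above are expected fallout
  fi
  rm -f /var/run/spdk_stub0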
00:09:28.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.303 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.304 --rc genhtml_branch_coverage=1 00:09:28.304 --rc genhtml_function_coverage=1 00:09:28.304 --rc genhtml_legend=1 00:09:28.304 --rc geninfo_all_blocks=1 00:09:28.304 --rc geninfo_unexecuted_blocks=1 00:09:28.304 00:09:28.304 ' 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.304 --rc genhtml_branch_coverage=1 00:09:28.304 --rc genhtml_function_coverage=1 00:09:28.304 --rc genhtml_legend=1 00:09:28.304 --rc geninfo_all_blocks=1 00:09:28.304 --rc geninfo_unexecuted_blocks=1 00:09:28.304 00:09:28.304 ' 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.304 --rc genhtml_branch_coverage=1 00:09:28.304 --rc genhtml_function_coverage=1 00:09:28.304 --rc genhtml_legend=1 00:09:28.304 --rc geninfo_all_blocks=1 00:09:28.304 --rc geninfo_unexecuted_blocks=1 00:09:28.304 00:09:28.304 ' 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.304 --rc genhtml_branch_coverage=1 00:09:28.304 --rc genhtml_function_coverage=1 00:09:28.304 --rc genhtml_legend=1 00:09:28.304 --rc geninfo_all_blocks=1 00:09:28.304 --rc geninfo_unexecuted_blocks=1 00:09:28.304 00:09:28.304 ' 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:28.304 
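The scripts/common.sh trace above is the lcov version gate: 'lt 1.15 2' splits both versions on dots and compares them field by field. A condensed sketch of that comparison:

  lt() {   # status 0 when $1 < $2, comparing dot/dash-separated numeric fields
      local -a v1 v2
      local i len
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # versions are equal
  }
  lt 1.15 2 && echo 'lcov older than 2: use the legacy --rc flags'   # 1 < 2, so this fires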
00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:28.304 00:32:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:28.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76006 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76006 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 76006 ']' 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
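The get_first_nvme_bdf walk above amounts to: render a config with gen_nvme.sh, extract every traddr with jq, take the first address. A condensed sketch (the real helper also sorts the list; omitted here):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1   # the (( 4 == 0 )) guard above
  printf '%s\n' "${bdfs[@]}"        # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
  bdf=${bdfs[0]}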
00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.304 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:28.565 [2024-11-27 00:32:05.122789] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:09:28.565 [2024-11-27 00:32:05.122899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:09:28.565 [2024-11-27 00:32:05.281553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.565 [2024-11-27 00:32:05.309164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.565 [2024-11-27 00:32:05.309383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.565 [2024-11-27 00:32:05.309618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.565 [2024-11-27 00:32:05.309707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.509 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.509 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:29.509 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:29.509 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.509 00:32:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:29.509 nvme0n1 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Ok9XQ.txt 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:29.509 true 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732667526 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76028 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:29.509 00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:29.509 
00:32:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:31.500 [2024-11-27 00:32:08.055344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:31.500 [2024-11-27 00:32:08.055637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:31.500 [2024-11-27 00:32:08.055657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:31.500 [2024-11-27 00:32:08.055671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:31.500 [2024-11-27 00:32:08.057714] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:31.500 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76028 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76028 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76028 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Ok9XQ.txt 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Ok9XQ.txt 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76006 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 76006 ']' 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 76006 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76006 00:09:31.500 killing process with pid 76006 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76006' 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 76006 00:09:31.500 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 76006 00:09:31.759 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:31.759 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:31.759 ************************************ 00:09:31.759 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:31.759 ************************************ 00:09:31.759 00:09:31.759 real 0m3.614s 
00:09:31.759 user 0m12.909s 00:09:31.759 sys 0m0.503s 00:09:31.759 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.759 00:32:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:31.759 00:32:08 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:31.759 00:32:08 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:31.759 00:32:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.759 00:32:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.759 00:32:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.759 ************************************ 00:09:31.759 START TEST nvme_fio 00:09:31.759 ************************************ 00:09:31.759 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:31.759 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:31.759 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:31.759 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:31.759 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:31.759 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:31.759 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.759 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.759 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:32.017 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:32.017 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:32.017 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:32.017 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:32.017 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:32.017 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:32.017 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:32.017 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:32.017 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:32.275 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:32.275 00:32:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:32.275 00:32:08 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.275 00:32:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:32.275 00:32:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:32.275 00:32:09 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:32.275 00:32:09 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:32.275 00:32:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:32.275 00:32:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:32.534 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:32.534 fio-3.35 00:09:32.534 Starting 1 thread 00:09:36.744 00:09:36.744 test: (groupid=0, jobs=1): err= 0: pid=76152: Wed Nov 27 00:32:13 2024 00:09:36.744 read: IOPS=22.1k, BW=86.2MiB/s (90.4MB/s)(172MiB/2001msec) 00:09:36.744 slat (nsec): min=3256, max=78853, avg=5117.67, stdev=2346.50 00:09:36.744 clat (usec): min=679, max=9053, avg=2894.10, stdev=899.38 00:09:36.744 lat (usec): min=682, max=9118, avg=2899.21, stdev=900.69 00:09:36.744 clat percentiles (usec): 00:09:36.744 | 1.00th=[ 1942], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:36.744 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:36.744 | 70.00th=[ 2802], 80.00th=[ 3163], 90.00th=[ 4015], 95.00th=[ 5080], 00:09:36.744 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7111], 99.95th=[ 7504], 00:09:36.744 | 99.99th=[ 8979] 00:09:36.744 bw ( KiB/s): min=81920, max=92872, per=100.00%, avg=88720.00, stdev=5936.79, samples=3 00:09:36.744 iops : min=20480, max=23218, avg=22180.00, stdev=1484.20, samples=3 00:09:36.744 write: IOPS=21.9k, BW=85.6MiB/s (89.8MB/s)(171MiB/2001msec); 0 zone resets 00:09:36.744 slat (nsec): min=3386, max=56888, avg=5426.46, stdev=2261.26 00:09:36.744 clat (usec): min=193, max=8974, avg=2904.94, stdev=906.68 00:09:36.744 lat (usec): min=198, max=8989, avg=2910.37, stdev=907.98 00:09:36.744 clat percentiles (usec): 00:09:36.744 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:36.744 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2671], 00:09:36.744 | 70.00th=[ 2802], 80.00th=[ 3163], 90.00th=[ 4047], 95.00th=[ 5080], 00:09:36.744 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7111], 99.95th=[ 8094], 00:09:36.744 | 99.99th=[ 8848] 00:09:36.744 bw ( KiB/s): min=83856, max=92592, per=100.00%, avg=88917.33, stdev=4530.07, samples=3 00:09:36.745 iops : min=20962, max=23148, avg=22229.33, stdev=1133.94, samples=3 00:09:36.745 lat (usec) : 250=0.01%, 750=0.01%, 1000=0.03% 00:09:36.745 lat (msec) : 2=1.27%, 4=88.39%, 10=10.30% 00:09:36.745 cpu : usr=99.10%, sys=0.15%, ctx=7, majf=0, minf=626 
00:09:36.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:36.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.745 issued rwts: total=44159,43869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.745 00:09:36.745 Run status group 0 (all jobs): 00:09:36.745 READ: bw=86.2MiB/s (90.4MB/s), 86.2MiB/s-86.2MiB/s (90.4MB/s-90.4MB/s), io=172MiB (181MB), run=2001-2001msec 00:09:36.745 WRITE: bw=85.6MiB/s (89.8MB/s), 85.6MiB/s-85.6MiB/s (89.8MB/s-89.8MB/s), io=171MiB (180MB), run=2001-2001msec 00:09:37.005 ----------------------------------------------------- 00:09:37.005 Suppressions used: 00:09:37.005 count bytes template 00:09:37.005 1 32 /usr/src/fio/parse.c 00:09:37.005 1 8 libtcmalloc_minimal.so 00:09:37.005 ----------------------------------------------------- 00:09:37.005 00:09:37.005 00:32:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:37.005 00:32:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:37.005 00:32:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:37.005 00:32:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:37.266 00:32:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:37.266 00:32:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:37.528 00:32:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:37.528 00:32:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:37.528 00:32:14 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:37.528 00:32:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:37.789 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:37.789 fio-3.35 00:09:37.789 Starting 1 thread 00:09:44.376 00:09:44.376 test: (groupid=0, jobs=1): err= 0: pid=76208: Wed Nov 27 00:32:20 2024 00:09:44.376 read: IOPS=23.2k, BW=90.5MiB/s (94.8MB/s)(181MiB/2001msec) 00:09:44.376 slat (nsec): min=4184, max=59523, avg=4932.30, stdev=1853.00 00:09:44.376 clat (usec): min=295, max=10019, avg=2760.78, stdev=823.28 00:09:44.376 lat (usec): min=299, max=10074, avg=2765.72, stdev=824.29 00:09:44.376 clat percentiles (usec): 00:09:44.376 | 1.00th=[ 1942], 5.00th=[ 2089], 10.00th=[ 2180], 20.00th=[ 2278], 00:09:44.376 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:44.376 | 70.00th=[ 2737], 80.00th=[ 2933], 90.00th=[ 3621], 95.00th=[ 4490], 00:09:44.376 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7242], 99.95th=[ 8160], 00:09:44.376 | 99.99th=[ 9896] 00:09:44.376 bw ( KiB/s): min=84368, max=99064, per=97.51%, avg=90320.00, stdev=7735.60, samples=3 00:09:44.377 iops : min=21092, max=24766, avg=22580.00, stdev=1933.90, samples=3 00:09:44.377 write: IOPS=23.0k, BW=89.9MiB/s (94.3MB/s)(180MiB/2001msec); 0 zone resets 00:09:44.377 slat (nsec): min=4318, max=53404, avg=5186.63, stdev=1859.80 00:09:44.377 clat (usec): min=287, max=9945, avg=2764.81, stdev=820.33 00:09:44.377 lat (usec): min=291, max=9959, avg=2770.00, stdev=821.34 00:09:44.377 clat percentiles (usec): 00:09:44.377 | 1.00th=[ 1942], 5.00th=[ 2089], 10.00th=[ 2180], 20.00th=[ 2278], 00:09:44.377 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:44.377 | 70.00th=[ 2737], 80.00th=[ 2966], 90.00th=[ 3621], 95.00th=[ 4490], 00:09:44.377 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 8291], 00:09:44.377 | 99.99th=[ 9765] 00:09:44.377 bw ( KiB/s): min=84280, max=100088, per=98.36%, avg=90568.00, stdev=8384.96, samples=3 00:09:44.377 iops : min=21070, max=25022, avg=22642.00, stdev=2096.24, samples=3 00:09:44.377 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.01% 00:09:44.377 lat (msec) : 2=1.85%, 4=90.83%, 10=7.28%, 20=0.01% 00:09:44.377 cpu : usr=99.20%, sys=0.15%, ctx=4, majf=0, minf=626 00:09:44.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:44.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.377 issued rwts: total=46334,46060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.377 00:09:44.377 Run status group 0 (all jobs): 00:09:44.377 READ: bw=90.5MiB/s (94.8MB/s), 90.5MiB/s-90.5MiB/s (94.8MB/s-94.8MB/s), io=181MiB (190MB), run=2001-2001msec 00:09:44.377 WRITE: bw=89.9MiB/s (94.3MB/s), 89.9MiB/s-89.9MiB/s (94.3MB/s-94.3MB/s), io=180MiB (189MB), run=2001-2001msec 00:09:44.377 ----------------------------------------------------- 00:09:44.377 Suppressions used: 00:09:44.377 count bytes template 00:09:44.377 1 32 /usr/src/fio/parse.c 00:09:44.377 1 8 libtcmalloc_minimal.so 00:09:44.377 ----------------------------------------------------- 00:09:44.377 00:09:44.377 
00:32:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:44.377 00:32:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:44.377 00:32:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:44.377 00:32:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:44.639 00:32:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:44.639 00:32:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:44.901 00:32:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:44.901 00:32:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:44.901 00:32:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:44.901 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:44.901 fio-3.35 00:09:44.901 Starting 1 thread 00:09:50.191 00:09:50.191 test: (groupid=0, jobs=1): err= 0: pid=76268: Wed Nov 27 00:32:26 2024 00:09:50.191 read: IOPS=17.7k, BW=69.0MiB/s (72.4MB/s)(138MiB/2001msec) 00:09:50.191 slat (nsec): min=4783, max=71783, avg=6056.83, stdev=2571.06 00:09:50.191 clat (usec): min=487, max=13003, avg=3599.68, stdev=1303.29 00:09:50.191 lat (usec): min=494, max=13018, avg=3605.74, stdev=1304.16 00:09:50.191 clat percentiles (usec): 00:09:50.191 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:50.191 | 30.00th=[ 2638], 
40.00th=[ 2835], 50.00th=[ 3064], 60.00th=[ 3425], 00:09:50.191 | 70.00th=[ 4178], 80.00th=[ 4817], 90.00th=[ 5538], 95.00th=[ 6063], 00:09:50.191 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[10552], 99.95th=[11994], 00:09:50.191 | 99.99th=[12780] 00:09:50.191 bw ( KiB/s): min=61760, max=77288, per=100.00%, avg=71877.33, stdev=8768.94, samples=3 00:09:50.191 iops : min=15440, max=19320, avg=17968.67, stdev=2191.62, samples=3 00:09:50.191 write: IOPS=17.7k, BW=69.0MiB/s (72.4MB/s)(138MiB/2001msec); 0 zone resets 00:09:50.191 slat (nsec): min=4891, max=53216, avg=6455.98, stdev=2626.59 00:09:50.191 clat (usec): min=499, max=12955, avg=3618.65, stdev=1322.42 00:09:50.191 lat (usec): min=506, max=12961, avg=3625.10, stdev=1323.29 00:09:50.191 clat percentiles (usec): 00:09:50.191 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:50.191 | 30.00th=[ 2671], 40.00th=[ 2835], 50.00th=[ 3097], 60.00th=[ 3458], 00:09:50.191 | 70.00th=[ 4228], 80.00th=[ 4883], 90.00th=[ 5604], 95.00th=[ 6128], 00:09:50.192 | 99.00th=[ 7111], 99.50th=[ 7701], 99.90th=[10945], 99.95th=[11994], 00:09:50.192 | 99.99th=[12780] 00:09:50.192 bw ( KiB/s): min=61536, max=77536, per=100.00%, avg=71824.00, stdev=8927.76, samples=3 00:09:50.192 iops : min=15384, max=19384, avg=17956.00, stdev=2231.94, samples=3 00:09:50.192 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:50.192 lat (msec) : 2=0.19%, 4=67.43%, 10=32.21%, 20=0.16% 00:09:50.192 cpu : usr=99.05%, sys=0.05%, ctx=5, majf=0, minf=625 00:09:50.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:50.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.192 issued rwts: total=35369,35368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.192 00:09:50.192 Run status group 0 (all jobs): 00:09:50.192 READ: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=138MiB (145MB), run=2001-2001msec 00:09:50.192 WRITE: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=138MiB (145MB), run=2001-2001msec 00:09:50.452 ----------------------------------------------------- 00:09:50.452 Suppressions used: 00:09:50.453 count bytes template 00:09:50.453 1 32 /usr/src/fio/parse.c 00:09:50.453 1 8 libtcmalloc_minimal.so 00:09:50.453 ----------------------------------------------------- 00:09:50.453 00:09:50.453 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:50.453 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:50.453 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:50.453 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:50.712 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:50.712 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:50.972 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:50.972 00:32:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:50.972 00:32:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:50.972 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:50.972 fio-3.35 00:09:50.972 Starting 1 thread 00:09:59.110 00:09:59.110 test: (groupid=0, jobs=1): err= 0: pid=76330: Wed Nov 27 00:32:35 2024 00:09:59.110 read: IOPS=24.7k, BW=96.5MiB/s (101MB/s)(193MiB/2001msec) 00:09:59.110 slat (nsec): min=4193, max=45738, avg=4827.07, stdev=1761.01 00:09:59.110 clat (usec): min=193, max=13292, avg=2589.11, stdev=737.50 00:09:59.110 lat (usec): min=197, max=13336, avg=2593.93, stdev=738.57 00:09:59.110 clat percentiles (usec): 00:09:59.110 | 1.00th=[ 1893], 5.00th=[ 2040], 10.00th=[ 2114], 20.00th=[ 2212], 00:09:59.110 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:59.110 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 2999], 95.00th=[ 3916], 00:09:59.110 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[ 7963], 99.95th=[ 9241], 00:09:59.110 | 99.99th=[12911] 00:09:59.110 bw ( KiB/s): min=94064, max=105480, per=100.00%, avg=98978.67, stdev=5871.06, samples=3 00:09:59.110 iops : min=23516, max=26370, avg=24744.67, stdev=1467.77, samples=3 00:09:59.110 write: IOPS=24.6k, BW=95.9MiB/s (101MB/s)(192MiB/2001msec); 0 zone resets 00:09:59.110 slat (nsec): min=4255, max=84404, avg=5104.07, stdev=1837.27 00:09:59.110 clat (usec): min=209, max=13107, avg=2589.26, stdev=734.27 00:09:59.110 lat (usec): min=213, max=13121, avg=2594.37, stdev=735.38 00:09:59.110 clat percentiles (usec): 00:09:59.110 | 1.00th=[ 1876], 5.00th=[ 2040], 10.00th=[ 2114], 20.00th=[ 2212], 00:09:59.110 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:09:59.110 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 2999], 95.00th=[ 3851], 
00:09:59.110 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 8160], 99.95th=[ 9765], 00:09:59.110 | 99.99th=[12518] 00:09:59.110 bw ( KiB/s): min=93920, max=105456, per=100.00%, avg=99050.67, stdev=5872.68, samples=3 00:09:59.110 iops : min=23480, max=26364, avg=24762.67, stdev=1468.17, samples=3 00:09:59.110 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:09:59.110 lat (msec) : 2=3.15%, 4=92.07%, 10=4.68%, 20=0.04% 00:09:59.111 cpu : usr=99.30%, sys=0.05%, ctx=2, majf=0, minf=624 00:09:59.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:59.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.111 issued rwts: total=49448,49136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.111 00:09:59.111 Run status group 0 (all jobs): 00:09:59.111 READ: bw=96.5MiB/s (101MB/s), 96.5MiB/s-96.5MiB/s (101MB/s-101MB/s), io=193MiB (203MB), run=2001-2001msec 00:09:59.111 WRITE: bw=95.9MiB/s (101MB/s), 95.9MiB/s-95.9MiB/s (101MB/s-101MB/s), io=192MiB (201MB), run=2001-2001msec 00:09:59.111 ----------------------------------------------------- 00:09:59.111 Suppressions used: 00:09:59.111 count bytes template 00:09:59.111 1 32 /usr/src/fio/parse.c 00:09:59.111 1 8 libtcmalloc_minimal.so 00:09:59.111 ----------------------------------------------------- 00:09:59.111 00:09:59.111 ************************************ 00:09:59.111 END TEST nvme_fio 00:09:59.111 ************************************ 00:09:59.111 00:32:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:59.111 00:32:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:59.111 00:09:59.111 real 0m27.129s 00:09:59.111 user 0m16.214s 00:09:59.111 sys 0m20.029s 00:09:59.111 00:32:35 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.111 00:32:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:59.111 ************************************ 00:09:59.111 END TEST nvme 00:09:59.111 ************************************ 00:09:59.111 00:09:59.111 real 1m36.354s 00:09:59.111 user 3m34.731s 00:09:59.111 sys 0m30.812s 00:09:59.111 00:32:35 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.111 00:32:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.111 00:32:35 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:59.111 00:32:35 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:59.111 00:32:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.111 00:32:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.111 00:32:35 -- common/autotest_common.sh@10 -- # set +x 00:09:59.111 ************************************ 00:09:59.111 START TEST nvme_scc 00:09:59.111 ************************************ 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:59.111 * Looking for test storage... 
00:09:59.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.111 --rc genhtml_branch_coverage=1 00:09:59.111 --rc genhtml_function_coverage=1 00:09:59.111 --rc genhtml_legend=1 00:09:59.111 --rc geninfo_all_blocks=1 00:09:59.111 --rc geninfo_unexecuted_blocks=1 00:09:59.111 00:09:59.111 ' 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.111 --rc genhtml_branch_coverage=1 00:09:59.111 --rc genhtml_function_coverage=1 00:09:59.111 --rc genhtml_legend=1 00:09:59.111 --rc geninfo_all_blocks=1 00:09:59.111 --rc geninfo_unexecuted_blocks=1 00:09:59.111 00:09:59.111 ' 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.111 --rc genhtml_branch_coverage=1 00:09:59.111 --rc genhtml_function_coverage=1 00:09:59.111 --rc genhtml_legend=1 00:09:59.111 --rc geninfo_all_blocks=1 00:09:59.111 --rc geninfo_unexecuted_blocks=1 00:09:59.111 00:09:59.111 ' 00:09:59.111 00:32:35 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.111 --rc genhtml_branch_coverage=1 00:09:59.111 --rc genhtml_function_coverage=1 00:09:59.111 --rc genhtml_legend=1 00:09:59.111 --rc geninfo_all_blocks=1 00:09:59.111 --rc geninfo_unexecuted_blocks=1 00:09:59.111 00:09:59.111 ' 00:09:59.111 00:32:35 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.111 00:32:35 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.111 00:32:35 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.111 00:32:35 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.111 00:32:35 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.111 00:32:35 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:59.111 00:32:35 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
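The lt/cmp_versions trace above, used to decide whether the installed lcov predates 2.x, is a pure-bash dotted-version comparison: both strings are split on '.', '-', or ':' and compared field by field as decimal numbers, with missing fields treated as 0. A condensed sketch of the idea, not the verbatim scripts/common.sh code, and assuming purely numeric fields:

    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( 10#$a < 10#$b )) && return 0   # strictly smaller field: less-than
            (( 10#$a > 10#$b )) && return 1   # strictly larger field: not less-than
        done
        return 1                              # all fields equal: not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"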
00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:59.111 00:32:35 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:59.111 00:32:35 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.111 00:32:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:59.111 00:32:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:59.111 00:32:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:59.111 00:32:35 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:59.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.683 Waiting for block devices as requested 00:09:59.683 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.683 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.943 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.943 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.243 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:05.243 00:32:41 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:05.243 00:32:41 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:05.243 00:32:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:05.243 00:32:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:05.243 00:32:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:05.243 00:32:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:05.244 00:32:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:05.244 00:32:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:05.244 00:32:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.244 00:32:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
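The nvme_get call above drives the long dump that follows: nvme-cli's id-ctrl output is one "field : value" pair per line, so functions.sh sets IFS to ':', reads each line into a register name and value, and evals the pair into an associative array named after the controller. A self-contained sketch of the same parse, using a fixed array name instead of the by-reference array the real function fills:

    declare -A ctrl

    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
        reg=${reg//[[:space:]]/}               # drop the column padding
        val=${val# }                           # drop the space after ':'
        ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "sn=${ctrl[sn]} mdts=${ctrl[mdts]} oncs=${ctrl[oncs]}"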
00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.244 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
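One of the values captured a few entries above deserves decoding: mdts=7 is not a byte count. MDTS scales the controller's minimum memory page size (CAP.MPSMIN) by a power of two, so assuming the usual 4 KiB minimum page, the largest single transfer this QEMU controller accepts is:

    mdts=7
    mpsmin_bytes=4096   # assumed CAP.MPSMIN of 4 KiB; the real value comes from the CAP register
    echo "$(( (1 << mdts) * mpsmin_bytes )) bytes"   # 524288, i.e. 512 KiB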
00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.245 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:05.246 00:32:41 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.246 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:05.247 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.247 00:32:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:05.248 
00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:05.248 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
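The ng0n1 dump above comes from the namespace walk in functions.sh, which relies on the extglob option enabled earlier in scripts/common.sh: a single pattern matches both the generic character-device nodes (ng0n1, ...) and the block namespaces (nvme0n1, ...) under a controller's sysfs directory. A standalone sketch of that glob:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme0
    # ${ctrl##*nvme} -> "0" and ${ctrl##*/} -> "nvme0", so the pattern below
    # expands to /sys/class/nvme/nvme0/@(ng0|nvme0n)* and matches ng0n1, nvme0n1, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue   # an unmatched glob stays literal without nullglob
        echo "namespace entry: ${ns##*/}"
    done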
00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:05.249 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.249 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:05.250 00:32:41 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 
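Every xtrace line above is the same helper at work: nvme_get (nvme/functions.sh@16-23) runs nvme-cli, splits each "reg : val" output line on the first colon, and evals non-empty values into a global associative array named after the device (ng0n1, nvme0n1, ...). A minimal sketch of that loop, reconstructed from the traced line numbers — the whitespace trimming and the bare `nvme` command name are illustrative assumptions, not the verbatim upstream source:

    # Sketch of the parse loop traced at nvme/functions.sh@16-23 (assumed details marked).
    nvme_get() {
        local ref=$1 reg val             # @17
        local -a tok
        shift                            # @18
        local -gA "$ref=()"              # @20: one global assoc array per device
        while IFS=: read -r reg val; do  # @21: split on the first ':' only
            read -r reg _ <<< "$reg"     # trim padding around the key (assumed)
            read -ra tok <<< "$val"      # squeeze whitespace in the value (assumed)
            val=${tok[*]}
            [[ -n $val ]] || continue    # @22: skip fields with no value
            eval "${ref}[$reg]=\"$val\"" # @23: e.g. nvme0n1[nsze]="0x140000"
        done < <(nvme "$@")              # @16: e.g. nvme id-ns /dev/nvme0n1
    }

Because read is given only two names, everything after the first colon lands in val, which is why composite fields such as lbaf0's "ms:0 lbads:9 rp:0" survive with their internal colons intact.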
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.250 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:05.251 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.251 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:05.252 00:32:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:05.252 00:32:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:05.252 00:32:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.252 00:32:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:05.252 00:32:41 
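At this point nvme0 is fully catalogued: both sysfs views of its namespace (the generic char node ng0n1 and the block node nvme0n1) have been parsed, and functions.sh@60-63 records the controller in the global maps ctrls, nvmes (which stores the name of the per-controller namespace array, nvme0_ns), bdfs (PCI address 0000:00:11.0) and ordered_ctrls. The scan then advances to /sys/class/nvme/nvme1, where pci_can_use returns 0 because no PCI allow/block filter is configured — hence the empty left-hand side in the `[[ =~ 0000:00:10.0 ]]` test at scripts/common.sh@21. A hypothetical consumer of those maps might look like this (illustrative only; not code from the suite):

    # Illustrative only: walking the maps populated at functions.sh@58-63.
    for ctrl in "${ordered_ctrls[@]}"; do
        declare -n ns_map=${nvmes[$ctrl]}   # dereference e.g. "nvme0_ns"
        printf '%s @ %s: ns %s\n' "$ctrl" "${bdfs[$ctrl]}" "${ns_map[*]}"
    done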
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.252 
00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:05.252 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:05.253 
00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:05.253 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.254 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
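Several of the id-ctrl values just stored are packed encodings rather than plain counts: sqes=0x66 and cqes=0x44 carry log2 of the minimum entry size in the low nibble and log2 of the maximum in the high nibble (so 64-byte submission and 16-byte completion queue entries), while wctemp=343 and cctemp=373 are absolute temperatures in Kelvin. A quick decode, using only values visible in the trace:

    # Decoding packed id-ctrl fields from the trace above.
    sqes=0x66 cqes=0x44
    echo $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))   # SQ entry: min 64 B, max 64 B
    echo $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))   # CQ entry: min 16 B, max 16 B
    echo $((343 - 273))C $((373 - 273))C                # wctemp ~70 C, cctemp ~100 C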
00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.255 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.255 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:05.256 00:32:41 
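With nvme1's id-ctrl captured — a QEMU-emulated controller (sn "12340", subnqn nqn.2019-08.org.qemu:12340) — functions.sh@53 binds the nameref _ctrl_ns to nvme1_ns and @54 globs sysfs for that controller's namespace nodes. The extglob pattern matches both naming schemes at once: the generic char device (ng1n1) and the block device (nvme1n1). A standalone sketch of that scan (declare -n stands in for the in-function local -n; the glob pattern itself is verbatim from the trace):

    # Sketch of the namespace scan at functions.sh@53-57 (requires extglob).
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    declare -n _ctrl_ns=nvme1_ns                    # @53, local -n in the real script
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng1* | nvme1n*
        [[ -e $ns ]] || continue                    # @55
        ns_dev=${ns##*/}                            # @56: ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # @57
    done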
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:10:05.256 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:10:05.257 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
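The ng1n1 fields just captured are enough to compute the namespace's usable size by hand: flbas=0x7 selects LBA format 7, which the dump shows as 'ms:64 lbads:12 rp:0 (in use)', i.e. 2^12-byte blocks, and nsze counts blocks. A quick check with plain shell arithmetic on those values:

    nsze=0x17a17a      # namespace size, in logical blocks
    lbads=12           # from lbaf7, the format flbas marks as in use
    echo $(( nsze * (1 << lbads) ))   # 6343335936 bytes, ~5.9 GiB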
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:10:05.258 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:10:05.259 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
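Note how the @54 loop found both namespace nodes for this controller: the extglob alternation expands to ng1* and nvme1n* under /sys/class/nvme/nvme1, and the @58 assignment keys both by namespace id, so nvme1n1 (visited second) is what _ctrl_ns[1] ends up holding. A standalone sketch of that pattern (extglob must be enabled, as it evidently is for the traced script):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # "ng${ctrl##*nvme}" -> ng1     (generic character device, ng1n1)
    # "${ctrl##*/}n"     -> nvme1n  (block device, nvme1n1)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        printf '%s -> nsid %s\n' "${ns##*/}" "${ns##*n}"   # both report nsid 1
    done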
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:05.260 00:32:41 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:05.260 00:32:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:05.260 00:32:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:05.260 00:32:41 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
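With nvme1 fully parsed, the @60-@63 frames above record it in the script's global bookkeeping before the outer loop moves on to nvme2 at PCI 0000:00:12.0. The shapes of those structures, as inferred from the trace (the declare lines here are assumptions; only the assignments appear in the log):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrls[nvme1]=nvme1           # controller -> name of its id-ctrl array
    nvmes[nvme1]=nvme1_ns        # controller -> name of its nsid->device map
    bdfs[nvme1]=0000:00:10.0     # controller -> PCI address
    ordered_ctrls[1]=nvme1       # numeric index from ${ctrl_dev/nvme/}

The empty left-hand sides in the scripts/common.sh@21 and @25 tests suggest no PCI allow/block list is set in this run, so pci_can_use accepts every controller (return 0).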
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:10:05.260 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
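oacs=0x12a is a bitmask of optional admin commands the controller supports; a quick way to decode it, with bit positions and names taken from the NVMe base specification's OACS layout rather than from this log:

    oacs=0x12a
    names=([1]="Format NVM" [3]="Namespace Management"
           [5]="Directives" [8]="Doorbell Buffer Config")
    for bit in "${!names[@]}"; do
        (( oacs & (1 << bit) )) && echo "supports: ${names[bit]}"
    done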
00:10:05.261 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:10:05.262 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
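Three of the nvme2 values above are log2-encoded and worth expanding: mdts gives the maximum transfer size in units of the controller's minimum page size, and the sqes/cqes nibbles give submission/completion queue entry sizes (low nibble = required minimum, high nibble = maximum). Assuming a 4 KiB minimum page size for this QEMU controller:

    mdts=7; mps_min=4096
    echo $(( (1 << mdts) * mps_min ))   # 524288 -> 512 KiB max transfer
    sqes=0x66; cqes=0x44
    echo $(( 1 << (sqes & 0xf) ))       # 64-byte SQ entries
    echo $(( 1 << (cqes & 0xf) ))       # 16-byte CQ entries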
00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:05.263 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.264 
00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
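
The trace above and below is the `nvme_get` helper from nvme/functions.sh at work: every `field : value` line printed by nvme-cli is split by `IFS=: read -r reg val`, and each pair is `eval`ed into a global associative array named after the device node (`nvme2`, `ng2n1`, ...), while functions.sh@54 globs /sys/class/nvme/nvme2 for both the `ng2nN` character nodes and the `nvme2nN` block nodes. A minimal self-contained sketch of that pattern, assuming nvme-cli is on PATH and the script runs as root; the names `parse_id_ns` and `ns_info` are illustrative here, not taken from functions.sh:

#!/usr/bin/env bash
# Sketch of the nvme_get pattern seen in this trace: split each
# "field : value" line of nvme-cli output on ':' and store it in an
# associative array keyed by field name. Helper/array names are
# illustrative; the real functions.sh evals into a dynamically named
# array instead, which is why each assignment in the trace shows up
# as an eval 'ng2n1[...]="..."' pair.
shopt -s extglob nullglob

parse_id_ns() {
  local dev=$1 reg val
  declare -gA ns_info=()                  # reset for each namespace
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # drop padding around the key
    val=${val##+([[:space:]])}            # left-trim the value
    [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines
    ns_info[$reg]=$val                    # lbaf lines keep their inner colons in val
  done < <(nvme id-ns "/dev/$dev")
}

ctrl=/sys/class/nvme/nvme2
# Same glob shape as functions.sh@54: ng2n* char nodes plus nvme2n* block nodes.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  parse_id_ns "${ns##*/}"
  echo "${ns##*/}: nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"
done

For the QEMU namespaces in this run the numbers decode as follows: flbas=0x4 selects LBA format 4, whose descriptor reads ms:0 lbads:12 (4096-byte data blocks, no metadata, the entry marked "(in use)"), so nsze=0x100000 blocks works out to 1,048,576 x 4096 bytes = 4 GiB per namespace.
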
00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.264 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.265 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:05.266 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:05.266 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 
00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.267 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.268 00:32:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.268 00:32:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.268 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:05.269 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- 
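The block above is nvme/functions.sh's `nvme_get` at work: it runs `nvme id-ns` (or `id-ctrl`) against a device, then walks the key:value output with `IFS=: read -r reg val`, eval-ing each pair into a global associative array named after the device (`ng2n3` has just finished; `nvme2n1` starts here). Because `read` assigns everything after the first colon to the last variable, a line such as `lbaf 0 : ms:0 lbads:9 rp:0` lands intact in `val`, which is why the `lbafN` entries hold whole descriptor strings. A minimal sketch of that pattern follows; `parse_id_output` and `my_ns` are illustrative names, not the actual functions.sh implementation.

# Minimal sketch of the read/eval pattern seen in the trace above
# (illustrative names; not the real functions.sh code).
parse_id_output() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"              # declare a *global* assoc array named by $ref
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # keys arrive padded, e.g. "nsze         "
        val=${val# }                 # drop the space after the first colon
        [[ -n $reg && -n $val ]] && eval "${ref}[${reg}]=\"\$val\""
    done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
}
# e.g. parse_id_output my_ns /dev/nvme2n1; echo "${my_ns[nsze]}"   -> 0x100000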
nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.270 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.271 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:05.271 00:32:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.271 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
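Each `lbafN` string captured above is one LBA format descriptor: `ms` is the per-block metadata size in bytes, `lbads` the LBA data size as a power of two, and `rp` a relative performance hint. `flbas=0x4` selects format 4, the one tagged `(in use)`: 4 KiB blocks, no metadata. Combined with `nsze=0x100000`, each of these namespaces works out to 0x100000 × 2^12 bytes = 4 GiB. A quick arithmetic check against the arrays populated above, using nothing beyond what the trace already parsed:

# Size check from the fields parsed above; lbads is a power of two.
flbas=$(( nvme2n1[flbas] & 0xf ))                 # low nibble -> active format (4)
lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<< "${nvme2n1[lbaf$flbas]}")
echo $(( nvme2n1[nsze] * (1 << lbads) ))          # 1048576 * 4096 = 4294967296 (4 GiB)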
]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:05.272 00:32:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.272 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.273 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:05.274 
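The loop header `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` that reappears above is an extglob alternation: with `ctrl=/sys/class/nvme/nvme2`, `${ctrl##*nvme}` expands to `2` and `${ctrl##*/}` to `nvme2`, so the glob matches both the generic `ng2*` character nodes and the `nvme2n*` block nodes, and `${ns##*n}` then yields the namespace index used to key `_ctrl_ns`. A standalone illustration, assuming the sysfs layout shown in the trace:

# Illustration of the extglob namespace walk seen above
# (the /sys/class/nvme layout is assumed, for demonstration only).
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "visiting ${ns##*/} -> namespace index ${ns##*n}"   # e.g. ng2n3 -> 3
done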
00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:05.274 00:32:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.274 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:05.275 00:32:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:05.275 00:32:42 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:05.275 00:32:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:05.275 00:32:42 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:05.276 00:32:42 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:05.276 00:32:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.276 00:32:42 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.276 00:32:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 
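With all three namespaces parsed, the `functions.sh@60`–`@63` lines above register the controller in four maps: `ctrls[nvme2]=nvme2`, `nvmes[nvme2]=nvme2_ns` (the name of its namespace map), `bdfs[nvme2]=0000:00:12.0` (its PCI address), and `ordered_ctrls[2]=nvme2` (a sparse indexed array keyed by controller number, giving a stable iteration order). The odd-looking `[[ =~ 0000:00:13.0 ]]` from scripts/common.sh is xtrace rendering an empty left-hand operand (no PCI allowlist is set in this run), so the following `[[ -z '' ]]` passes and `pci_can_use` returns 0 for the next controller. A hedged sketch of consuming these maps once populated, with the same names as above:

# Sketch: walking the maps the trace just populated.
for ctrl in "${ordered_ctrls[@]}"; do
    [[ -n $ctrl ]] || continue          # ordered_ctrls is sparse by index
    echo "$ctrl at ${bdfs[$ctrl]} (namespaces recorded in ${nvmes[$ctrl]})"
done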
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:05.539 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:05.540 00:32:42 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:05.540 00:32:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 
00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.540 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:05.541 00:32:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 
00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:05.541 
00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.541 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:05.542 00:32:42 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:05.542 00:32:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
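The register lookups traced on either side of this point all go through one small helper: get_nvme_ctrl_feature binds a bash nameref to the per-controller associative array filled in by the id-ctrl scan above, then echoes the requested field. A condensed sketch of that helper, reconstructed from the functions.sh@69-76 steps visible in the trace (a sketch only; the real helper's argument handling may differ in detail):

    # reconstructed from the functions.sh@69-76 trace entries above
    get_nvme_ctrl_feature() {      # e.g. get_nvme_ctrl_feature nvme1 oncs -> 0x15d
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl       # nameref into the nvme1=( [oncs]=0x15d ... ) array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

The nameref is what makes the scheme work with plain bash: each controller's id-ctrl dump lives in a global associative array named after the device node, so a single helper can service nvme0 through nvme3.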
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:10:05.542 00:32:42 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:10:05.542 00:32:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:05.543 00:32:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:05.543 00:32:42 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:05.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:06.376 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:06.688 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:06.688 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:06.688 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:06.688 00:32:43 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:06.688 00:32:43 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:06.688 00:32:43 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:06.688 00:32:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:06.688 ************************************
00:10:06.688 START TEST nvme_simple_copy ************************************
00:10:06.688 00:32:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:06.986 Initializing NVMe Controllers
00:10:06.986 Attaching to 0000:00:10.0
00:10:06.986 Controller supports SCC. Attached to 0000:00:10.0
00:10:06.986 Namespace ID: 1 size: 6GB
00:10:06.986 Initialization complete.
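Controller selection for the simple-copy test therefore reduces to one bitmask test per controller: ONCS bit 8 advertises the Simple Copy command, and every QEMU controller in this run reports oncs=0x15d, so all four pass and the harness takes the first one iterated, nvme1 at 0000:00:10.0, which is exactly the controller the "Controller supports SCC" attach message above confirms. A minimal sketch of the check, condensed from the ctrl_has_scc/get_oncs entries in the trace:

    # condensed from the functions.sh@184-188 trace entries above
    ctrl_has_scc() {
        local ctrl=$1 oncs
        oncs=$(get_nvme_ctrl_feature "$ctrl" oncs)   # 0x15d for nvme0..nvme3 here
        (( oncs & 1 << 8 ))                          # ONCS bit 8 = Simple Copy support
    }

0x15d & 0x100 is non-zero, which is why each loop iteration echoes its controller name and get_ctrl_with_feature ultimately returns nvme1.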
00:10:06.986
00:10:06.986 Controller QEMU NVMe Ctrl (12340 )
00:10:06.986 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:06.986 Namespace Block Size:4096
00:10:06.986 Writing LBAs 0 to 63 with Random Data
00:10:06.986 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:06.986 LBAs matching Written Data: 64
00:10:06.986
00:10:06.986 real 0m0.291s
00:10:06.986 user 0m0.111s
00:10:06.986 sys 0m0.076s
00:10:06.986 00:32:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:06.986 00:32:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:06.986 ************************************
00:10:06.986 END TEST nvme_simple_copy ************************************
00:10:06.986
00:10:06.986 real 0m7.930s
00:10:06.986 user 0m1.148s
00:10:06.986 sys 0m1.532s
00:10:06.986 ************************************
00:10:06.986 00:32:43 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:06.986 00:32:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:06.986 END TEST nvme_scc ************************************
00:10:06.986 00:32:43 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:06.986 00:32:43 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:06.986 00:32:43 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:06.986 00:32:43 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:06.986 00:32:43 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:06.986 00:32:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:06.986 00:32:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:06.986 00:32:43 -- common/autotest_common.sh@10 -- # set +x
00:10:06.986 ************************************
00:10:06.986 START TEST nvme_fdp ************************************
00:10:06.986 00:32:43 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:10:07.248 * Looking for test storage...
00:10:07.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.249 --rc genhtml_branch_coverage=1 00:10:07.249 --rc genhtml_function_coverage=1 00:10:07.249 --rc genhtml_legend=1 00:10:07.249 --rc geninfo_all_blocks=1 00:10:07.249 --rc geninfo_unexecuted_blocks=1 00:10:07.249 00:10:07.249 ' 00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.249 --rc genhtml_branch_coverage=1 00:10:07.249 --rc genhtml_function_coverage=1 00:10:07.249 --rc genhtml_legend=1 00:10:07.249 --rc geninfo_all_blocks=1 00:10:07.249 --rc geninfo_unexecuted_blocks=1 00:10:07.249 00:10:07.249 ' 00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.249 --rc genhtml_branch_coverage=1 00:10:07.249 --rc genhtml_function_coverage=1 00:10:07.249 --rc genhtml_legend=1 00:10:07.249 --rc geninfo_all_blocks=1 00:10:07.249 --rc geninfo_unexecuted_blocks=1 00:10:07.249 00:10:07.249 ' 00:10:07.249 00:32:43 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.249 --rc genhtml_branch_coverage=1 00:10:07.249 --rc genhtml_function_coverage=1 00:10:07.249 --rc genhtml_legend=1 00:10:07.249 --rc geninfo_all_blocks=1 00:10:07.249 --rc geninfo_unexecuted_blocks=1 00:10:07.249 00:10:07.249 ' 00:10:07.249 00:32:43 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.249 00:32:43 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.249 00:32:43 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.249 00:32:43 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.249 00:32:43 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.249 00:32:43 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:07.249 00:32:43 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:07.249 00:32:43 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:07.249 00:32:43 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.249 00:32:43 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:07.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.773 Waiting for block devices as requested 00:10:07.773 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.773 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.034 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.034 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.338 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:13.338 00:32:49 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:13.338 00:32:49 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:13.338 00:32:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:13.338 00:32:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:13.338 00:32:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.338 00:32:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:13.338 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:13.339 00:32:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0 id-ctrl (condensed): ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:10:13.339 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:10:13.340 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:10:13.340 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:13.340 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
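What this stretch of trace records is nvme/functions.sh building one bash associative array per device: every "field : value" line that nvme id-ctrl (or id-ns) prints is split on ":" and stored, e.g. nvme0[ctratt]=0x8000. A minimal sketch of the loop being traced at functions.sh@16-23 follows; the IFS=:/read/eval shape is taken from the trace, but the exact key/value trimming is an assumption, not copied from the real helper:

    # Sketch of the nvme_get loop traced above (functions.sh@16-23).
    # Assumes nvme-cli's human-readable "field : value" output; whitespace
    # handling is approximated.
    nvme_get() {
        local ref=$1 reg val                         # functions.sh@17
        shift                                        # functions.sh@18
        local -gA "$ref=()"                          # functions.sh@20
        while IFS=: read -r reg val; do              # functions.sh@21
            reg=${reg//[[:space:]]/}                 # "ctratt   " -> "ctratt"
            val=${val# }                             # drop a leading space
            [[ -n $val ]] || continue                # functions.sh@22
            eval "${ref}[$reg]=\"$val\""             # functions.sh@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # functions.sh@16
    }
    # e.g. nvme_get nvme0 id-ctrl /dev/nvme0, then: echo "${nvme0[oncs]}"

The eval step is exactly what the nvme0[...]="..." lines above correspond to; keeping one array per device makes later lookups like ${nvme0[oncs]} cheap for the rest of the test.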
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:10:13.341 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1 id-ns (condensed): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:10:13.342 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:13.342 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
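Two of the values just stored decode the namespace geometry: flbas=0x4 selects LBA format 4, and lbaf4 is the entry marked "(in use)" — lbads:12 means 2^12 = 4096-byte logical blocks, with ms:0 (no per-block metadata). A short sketch, assuming the ng0n1 array populated above, of recovering the byte size from those two fields:

    # Sketch: derive the active block size from the parsed ng0n1 fields.
    fmt=$(( ${ng0n1[flbas]} & 0xf ))    # flbas bits 3:0 -> format index 4
    lbaf=${ng0n1[lbaf$fmt]}             # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:}               # "12 rp:0 (in use)"
    lbads=${lbads%% *}                  # "12"
    echo "$(( 1 << lbads )) bytes"      # 4096 bytes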
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:13.343 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns (condensed): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
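At this point nvme0 is fully catalogued: functions.sh@58 recorded the per-namespace arrays in nvme0_ns, and @60-63 filed the controller into the shared maps the rest of the nvme_fdp run consults — ctrls, nvmes, bdfs (PCI address 0000:00:11.0) and ordered_ctrls. A sketch of the state those lines leave behind; the array names and values are from this trace, while the walk at the end is illustrative only, not part of the test:

    # Sketch: bookkeeping state after functions.sh@58-63 for this run.
    declare -A ctrls=( [nvme0]=nvme0 )        # device -> id-ctrl array name
    declare -A nvmes=( [nvme0]=nvme0_ns )     # device -> namespace map name
    declare -A bdfs=( [nvme0]=0000:00:11.0 )  # device -> PCI bdf
    declare -a ordered_ctrls=( [0]=nvme0 )    # index = N from nvmeN
    for ctrl in "${ordered_ctrls[@]}"; do     # hypothetical consumer
        echo "$ctrl -> ${bdfs[$ctrl]}"        # nvme0 -> 0000:00:11.0
    done

The discovery loop then repeats for nvme1 at 0000:00:10.0; pci_can_use (scripts/common.sh@18-27) returns 0 there because no PCI allow/block list is set in this trace.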
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.344 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:13.345 00:32:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:13.345 00:32:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:13.345 00:32:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.345 00:32:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' nvme1[mn]='QEMU NVMe Ctrl ' nvme1[fr]='8.0.0 '
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 nvme1[ieee]=525400 nvme1[cmic]=0 nvme1[mdts]=7 nvme1[cntlid]=0 nvme1[ver]=0x10400
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 nvme1[rtd3e]=0 nvme1[oaes]=0x100 nvme1[ctratt]=0x8000 nvme1[rrls]=0 nvme1[cntrltype]=1
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 nvme1[crdt1]=0 nvme1[crdt2]=0 nvme1[crdt3]=0
00:10:13.345 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 nvme1[vwci]=0 nvme1[mec]=0 nvme1[oacs]=0x12a nvme1[acl]=3 nvme1[aerl]=3 nvme1[frmw]=0x3 nvme1[lpa]=0x7 nvme1[elpe]=0
00:10:13.346 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 nvme1[avscc]=0 nvme1[apsta]=0 nvme1[wctemp]=343 nvme1[cctemp]=373
00:10:13.346 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 nvme1[hmpre]=0 nvme1[hmmin]=0 nvme1[tnvmcap]=0 nvme1[unvmcap]=0
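nvme1 reports mdts=7. Per the NVMe spec, MDTS is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN), so the byte limit is 2^MDTS * MPSMIN. The sketch below assumes the common 4 KiB MPSMIN rather than reading it from the CAP register:

#!/usr/bin/env bash
# Convert MDTS into a byte limit. Assumes MPSMIN = 4096 bytes; real code
# should derive MPSMIN from the controller's CAP register instead.
mdts=7
mpsmin=4096
if (( mdts == 0 )); then
  echo "no transfer size limit reported"
else
  echo "max transfer: $(( (1 << mdts) * mpsmin )) bytes"   # 7 -> 524288 (512 KiB)
fi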
00:10:13.346 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 nvme1[edstt]=0 nvme1[dsto]=0 nvme1[fwug]=0 nvme1[kas]=0
00:10:13.346 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 nvme1[mntmt]=0 nvme1[mxtmt]=0 nvme1[sanicap]=0 nvme1[hmminds]=0 nvme1[hmmaxd]=0
00:10:13.347 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 nvme1[endgidmax]=0 nvme1[anatt]=0 nvme1[anacap]=0 nvme1[anagrpmax]=0 nvme1[nanagrpid]=0 nvme1[pels]=0 nvme1[domainid]=0
00:10:13.347 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 nvme1[sqes]=0x66 nvme1[cqes]=0x44 nvme1[maxcmd]=0 nvme1[nn]=256 nvme1[oncs]=0x15d nvme1[fuses]=0 nvme1[fna]=0 nvme1[vwc]=0x7
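sqes=0x66 and cqes=0x44 each pack two powers of two: the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, which here gives 64-byte submission entries and 16-byte completion entries. A small sketch of the decode:

#!/usr/bin/env bash
# Decode SQES/CQES from the id-ctrl dump above: low nibble = required
# entry size, high nibble = maximum entry size, both as 2^n bytes.
decode_qes() {
  local name=$1 raw=$2
  printf '%s: required=%dB max=%dB\n' \
    "$name" "$((1 << (raw & 0xf)))" "$((1 << ((raw >> 4) & 0xf)))"
}

decode_qes sqes 0x66   # sqes: required=64B max=64B
decode_qes cqes 0x44   # cqes: required=16B max=16B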
00:10:13.347 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 nvme1[awupf]=0 nvme1[icsvscc]=0 nvme1[nwpc]=0 nvme1[acwu]=0 nvme1[ocfs]=0x3 nvme1[sgls]=0x1 nvme1[mnan]=0 nvme1[maxdna]=0 nvme1[maxcna]=0
00:10:13.347 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 nvme1[ioccsz]=0 nvme1[iorcsz]=0 nvme1[icdoff]=0 nvme1[fcatt]=0 nvme1[msdbd]=0 nvme1[ofcs]=0
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' nvme1[active_power_workload]=-
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=ng1n1 reg val; shift; local -gA 'ng1n1=()'
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a ng1n1[ncap]=0x17a17a ng1n1[nuse]=0x17a17a ng1n1[nsfeat]=0x14 ng1n1[nlbaf]=7
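The @(...) pattern in the loop above is a bash extglob that matches both namespace node flavours under a controller: the generic character device ngXnY and the block device nvmeXnY (hence ng1n1 is visited first, then nvme1n1 below). A standalone sketch of the same enumeration, using the same parameter expansions (extglob is required for @(a|b); nullglob added so an empty match expands to nothing):

#!/usr/bin/env bash
# Enumerate both ngXnY (char) and nvmeXnY (block) namespace nodes per
# controller, as the traced loop does.
shopt -s extglob nullglob

for ctrl in /sys/class/nvme/nvme*; do
  # ${ctrl##*nvme} -> controller index ("1"), ${ctrl##*/} -> "nvme1"
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "controller ${ctrl##*/}: namespace node ${ns##*/}"
  done
done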
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 ng1n1[mc]=0x3 ng1n1[dpc]=0x1f ng1n1[dps]=0 ng1n1[nmic]=0 ng1n1[rescap]=0 ng1n1[fpi]=0 ng1n1[dlfeat]=1
00:10:13.348 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 ng1n1[nawupf]=0 ng1n1[nacwu]=0 ng1n1[nabsn]=0 ng1n1[nabo]=0 ng1n1[nabspf]=0 ng1n1[noiob]=0 ng1n1[nvmcap]=0
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 ng1n1[npwa]=0 ng1n1[npdg]=0 ng1n1[npda]=0 ng1n1[nows]=0 ng1n1[mssrl]=128 ng1n1[mcl]=128 ng1n1[msrc]=127
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 ng1n1[anagrpid]=0 ng1n1[nsattr]=0 ng1n1[nvmsetid]=0 ng1n1[endgid]=0
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 ng1n1[eui64]=0000000000000000
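With nsze=0x17a17a and flbas=0x7, the namespace capacity follows from the selected LBA format: FLBAS bits 3:0 pick format 7, whose lbads is 12 (4096-byte blocks) in the lbaf table that follows. A sketch of the arithmetic, with values hard-coded from this dump for illustration:

#!/usr/bin/env bash
# Capacity in bytes = nsze (logical blocks) * 2^lbads of the active format.
nsze=0x17a17a    # namespace size in logical blocks (1548666)
flbas=0x7        # bits 3:0 select the active LBA format
lbads=12         # lbads of format $((flbas & 0xf)) per the lbaf table below

bytes=$(( nsze * (1 << lbads) ))
printf 'format %d: %d blocks x %d B = %d bytes\n' \
  "$((flbas & 0xf))" "$((nsze))" "$((1 << lbads))" "$bytes"
# -> format 7: 1548666 blocks x 4096 B = 6343335936 bytes (~5.9 GiB)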
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@17-20 -- # local ref=nvme1n1 reg val; shift; local -gA 'nvme1n1=()'
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:13.349 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a nvme1n1[ncap]=0x17a17a nvme1n1[nuse]=0x17a17a nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7
00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f nvme1n1[dps]=0 nvme1n1[nmic]=0 nvme1n1[rescap]=0 nvme1n1[fpi]=0 nvme1n1[dlfeat]=1
00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0 nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0 nvme1n1[nvmcap]=0 nvme1n1[npwg]=0
00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0 nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127 nvme1n1[nulbaf]=0 nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.350 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
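[editor's note] The xtrace running through this block shows nvme/functions.sh caching every "field : value" line that nvme-cli prints for `id-ns /dev/nvme1n1` into a global associative array named after the device node. All the moving parts are visible in the @-markers: @16 runs the nvme-cli binary, @21 sets IFS=: so `read -r reg val` splits each output line at the first colon, @22 skips lines with no value, and @23 evals the pair into the array. A minimal sketch of that pattern (trimming simplified; the real helper is nvme_get() in nvme/functions.sh):

    # Sketch of the nvme_get pattern traced above: run an nvme-cli
    # subcommand and cache its "field : value" output lines in a
    # global associative array whose name the caller chooses.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"             # e.g. declare -gA nvme1n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue   # headers/blank lines have no value part
            reg=${reg//[[:space:]]/}    # "lbaf  0 " -> "lbaf0", as in the trace
            eval "${ref}[\$reg]=\$val"  # nvme1n1[nsze]=" 0x17a17a" (real helper also trims val)
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme1n1 nvme id-ns /dev/nvme1n1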
00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:13.351 00:32:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:13.351 00:32:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:13.351 00:32:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.351 00:32:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.351 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
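[editor's note] The trace also shows the walk that found /dev/nvme2 in the first place: the @47 loop iterates /sys/class/nvme/nvme*, @49 resolves the controller's PCI address (0000:00:12.0 here), @50 gates it through pci_can_use in scripts/common.sh, and @52 hands the device to nvme_get. A condensed sketch of that walk, with pci_can_use reduced to a hypothetical PCI_ALLOWED allow-list check (the real helper also consults a block list, which is what the empty `[[ =~ 0000:00:12.0 ]]` test in the trace hints at):

    # Sketch of the controller discovery loop (functions.sh @47-@52).
    # PCI_ALLOWED and the readlink-based address lookup are illustrative;
    # the real filter is pci_can_use() in scripts/common.sh.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")        # e.g. 0000:00:12.0
        if [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $pci "* ]]; then
            continue                                           # controller filtered out
        fi
        ctrl_dev=${ctrl##*/}                                   # nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    done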
00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:13.352 00:32:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
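[editor's note] One detail in the id-ctrl fields just cached: wctemp=343 and cctemp=373 are kelvins, since the NVMe spec reports the warning and critical composite temperature thresholds in K, i.e. 70 °C and 100 °C for this QEMU controller. A test wanting Celsius could convert straight from the cached array (hypothetical usage):

    # Kelvin -> Celsius from the values nvme_get cached above
    echo "warning: $((nvme2[wctemp] - 273))C, critical: $((nvme2[cctemp] - 273))C"   # 70C, 100C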
00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.352 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:13.353 00:32:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:13.353 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
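[editor's note] Right after the ps0 power-state record below, the trace turns from the nvme2 controller to its namespaces. The @54 extglob pattern enumerates both the character-device nodes (ng2n1, ng2n2) and the block-device nodes (nvme2n1, ...) under the controller's sysfs directory, and @58 then indexes each one by its namespace number. The same glob in isolation, assuming ctrl=/sys/class/nvme/nvme2:

    # The @54 namespace glob on its own (requires extglob).
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # "ng${ctrl##*nvme}"* -> ng2n1, ng2n2, ...  (char devices)
        # "${ctrl##*/}n"*     -> nvme2n1, ...       (block devices)
        echo "ns node ${ns##*/} -> index ${ns##*n}" # ${ns##*n}: number after the last 'n', as @58 uses
    done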
00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.354 
00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.354 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.355 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:13.356 00:32:50 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.356 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 
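[editor's note] The long run of paired `IFS=:` / `read -r reg val` steps above is the trace of a single parsing loop: each line of `nvme id-ns` output is split on its first colon into a field name and a value, and non-empty values are stored into a per-namespace associative array (ng2n2 here). A minimal sketch of that idiom, assuming nvme-cli's usual `field : value` output layout; the array name ns_info is illustrative, not from functions.sh:

  declare -A ns_info
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # field names carry only padding whitespace
      val=${val# }               # drop the space that follows the colon
      [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2)
  echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

Note that a line such as "lbaf  4 : ms:0 lbads:12 rp:0 (in use)" yields reg=lbaf4 and keeps the rest of the line as the value, which is exactly the shape of the lbafN entries seen in the trace.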
00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:13.357 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:13.358 
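[editor's note] Each `eval 'ng2n3[...]="..."'` pair above is the generic assignment step of the same helper: the target array name arrives as a parameter ($ref at functions.sh@17), so the assignment must be composed as a string and eval'd. A hedged sketch of that pattern in isolation, with values chosen to match the ng2n3 trace:

  declare -A ng2n3
  ref=ng2n3 reg=dlfeat val=1
  # Build the assignment text, then eval it; shell assignments do not
  # word-split their right-hand side, so expanding $val inside eval is safe.
  eval "${ref}[\$reg]=\$val"
  declare -p ng2n3    # -> declare -A ng2n3=([dlfeat]="1")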
00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:13.358 00:32:50 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:13.358 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.359 00:32:50 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:13.359 00:32:50 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.359 
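[editor's note] The loop has now moved from the ng2* entries to /sys/class/nvme/nvme2/nvme2n1: on kernels with nvme generic char-device support, every namespace can expose both a character node (/dev/ngXnY) and a block node (/dev/nvmeXnY), and the sysfs glob at functions.sh@54 deliberately matches both spellings, so each namespace is identified twice with identical id-ns values. A small side check one could run to see the two node types (illustrative, not part of the test):

  for dev in /dev/ng2n1 /dev/nvme2n1; do
      [[ -e $dev ]] && stat -c '%n is a %F' "$dev"
  done
  # Expected on such a kernel:
  #   /dev/ng2n1 is a character special file
  #   /dev/nvme2n1 is a block special file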
00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:13.359 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:13.360 00:32:50 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.360 
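[editor's note] The array key in the `_ctrl_ns[${ns##*n}]=...` step that closes each namespace block (functions.sh@58, seen above for ng2n3 and coming next for nvme2n1) is a suffix extraction: stripping everything up to the last 'n' leaves the namespace index, so ng2n1 and nvme2n1 both land in slot 1, the block-device name simply overwriting the char-device one. A sketch of the expansions involved, with extglob enabled as the @54 pattern requires:

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme2
  echo "ng${ctrl##*nvme}"    # -> ng2     (char-node prefix)
  echo "${ctrl##*/}n"        # -> nvme2n  (block-node prefix)
  ns=$ctrl/nvme2n1
  echo "${ns##*n}"           # -> 1, the _ctrl_ns index
  # The @54 glob "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* therefore
  # matches ng2n1, ng2n2, ng2n3, nvme2n1, nvme2n2, ... under $ctrl.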
00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.360 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
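[editor's note] The lbafN strings just traced for nvme2n1 describe the eight advertised LBA formats (nlbaf=7 is zero-based): ms is the per-block metadata size in bytes, lbads the data size as a power of two, and rp a relative-performance hint. flbas carries the selected format index in its low nibble, which is why flbas=0x4 corresponds to lbaf4 (ms:0 lbads:12) being marked "(in use)". Together with nsze=0x100000 this fixes the namespace geometry; a quick arithmetic check, with values taken from the trace and the computation itself illustrative:

  flbas=0x4; nsze=0x100000; lbads=12   # nvme2n1 values from the trace above
  idx=$(( flbas & 0xf ))               # low nibble selects the LBA format
  bs=$(( 1 << lbads ))                 # lbads:12 -> 4096-byte blocks
  echo "format #$idx, ${bs}-byte blocks, $(( nsze * bs >> 30 )) GiB namespace"
  # -> format #4, 4096-byte blocks, 4 GiB namespace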
00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:13.361 00:32:50 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.361 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.628 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:13.629 00:32:50 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:13.629 00:32:50 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.629 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:13.630 00:32:50 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.630 00:32:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.630 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:13.631 00:32:50 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:13.631 00:32:50 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:13.631 00:32:50 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.631 00:32:50 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
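At this point the trace has registered nvme2 in the bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls), moved on to /sys/class/nvme/nvme3, and let pci_can_use admit BDF 0000:00:13.0 because no allow/block list is set. A condensed sketch of that discovery walk, with parse_id standing in for functions.sh's nvme_get (sketched further below) and the readlink step an assumption about how the BDF is obtained:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_is_usable() {
        # placeholder for pci_can_use: with PCI_ALLOWED/PCI_BLOCKED empty,
        # every BDF passes, matching the "return 0" from scripts/common.sh
        return 0
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0
        pci_is_usable "$pci" || continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme3
        parse_id "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
        for ns in "$ctrl/${ctrl_dev}"n*; do              # nvme3n1, nvme3n2, ...
            [[ -e $ns ]] || continue
            parse_id "${ns##*/}" nvme id-ns "/dev/${ns##*/}"
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                  # namespace-map name, as in the trace
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

Keeping every identify field in named associative arrays lets the test steps that follow read values such as ${nvme3[mdts]} directly instead of re-running nvme-cli.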
00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:13.631 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 
00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.632 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.633 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
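The long run above is nvme/functions.sh reading identify-controller output one "field : value" pair at a time and storing it in a per-controller bash array; the script uses eval because the array name (nvme0, nvme1, ...) is only known at runtime. A minimal standalone sketch of the same pattern, with a nameref in place of eval and a made-up three-field input:

  #!/usr/bin/env bash
  # Parse "field : value" lines into an associative array, mirroring the
  # IFS=: / read -r reg val / eval loop traced above (nameref instead of eval).
  declare -A nvme3
  parse_id_ctrl() {
    local -n _ctrl=$1                   # nameref to the target array, e.g. nvme3
    local reg val
    while IFS=': ' read -r reg val; do  # split on ':' and surrounding blanks
      [[ -n $reg && -n $val ]] && _ctrl[$reg]=$val
    done
  }
  # Process substitution keeps the loop out of a subshell so the
  # assignments survive; the three input lines are illustrative only.
  parse_id_ctrl nvme3 < <(printf '%s\n' 'sqes : 0x66' 'cqes : 0x44' 'nn : 256')
  echo "${nvme3[nn]}"                   # -> 256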
00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:13.634 00:32:50 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:13.634 00:32:50 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:13.634 00:32:50 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:13.634 00:32:50 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:14.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.779 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.779 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.779 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.779 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.779 00:32:51 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:14.779 00:32:51 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:14.779 00:32:51 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.779 00:32:51 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:14.779 ************************************ 00:10:14.779 START TEST nvme_flexible_data_placement 00:10:14.779 ************************************ 00:10:14.779 00:32:51 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:15.041 Initializing NVMe Controllers 00:10:15.041 Attaching to 0000:00:13.0 00:10:15.041 Controller supports FDP Attached to 0000:00:13.0 00:10:15.041 Namespace ID: 1 Endurance Group ID: 1 00:10:15.041 Initialization complete. 
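The selection loop above settled on nvme3 because its Controller Attributes value (0x88010) has bit 19 set, which is how ctrl_has_fdp recognizes Flexible Data Placement support; the other controllers report 0x8000 (bit 15 only) and are skipped. A standalone sketch of that test, using the ctratt values echoed in the trace:

  # Pick the controllers whose CTRATT advertises FDP (bit 19), mirroring
  # ctrl_has_fdp in nvme/functions.sh. Values are from the trace above;
  # the real script reads them out of the parsed identify data.
  declare -A ctratt=( [nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010 )
  for ctrl in "${!ctratt[@]}"; do
    if (( ctratt[$ctrl] & 1 << 19 )); then  # 1 << 19 == 0x80000
      echo "$ctrl"                          # only nvme3 in this run
    fi
  done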
00:10:15.041 00:10:15.041 ================================== 00:10:15.041 == FDP tests for Namespace: #01 == 00:10:15.041 ================================== 00:10:15.041 00:10:15.041 Get Feature: FDP: 00:10:15.041 ================= 00:10:15.041 Enabled: Yes 00:10:15.041 FDP configuration Index: 0 00:10:15.041 00:10:15.041 FDP configurations log page 00:10:15.041 =========================== 00:10:15.041 Number of FDP configurations: 1 00:10:15.041 Version: 0 00:10:15.041 Size: 112 00:10:15.041 FDP Configuration Descriptor: 0 00:10:15.041 Descriptor Size: 96 00:10:15.041 Reclaim Group Identifier format: 2 00:10:15.041 FDP Volatile Write Cache: Not Present 00:10:15.041 FDP Configuration: Valid 00:10:15.041 Vendor Specific Size: 0 00:10:15.041 Number of Reclaim Groups: 2 00:10:15.041 Number of Reclaim Unit Handles: 8 00:10:15.041 Max Placement Identifiers: 128 00:10:15.041 Number of Namespaces Supported: 256 00:10:15.041 Reclaim Unit Nominal Size: 6000000 bytes 00:10:15.041 Estimated Reclaim Unit Time Limit: Not Reported 00:10:15.041 RUH Desc #000: RUH Type: Initially Isolated 00:10:15.041 RUH Desc #001: RUH Type: Initially Isolated 00:10:15.041 RUH Desc #002: RUH Type: Initially Isolated 00:10:15.041 RUH Desc #003: RUH Type: Initially Isolated 00:10:15.041 RUH Desc #004: RUH Type: Initially Isolated 00:10:15.041 RUH Desc #005: RUH Type: Initially Isolated 00:10:15.041 RUH Desc #006: RUH Type: Initially Isolated 00:10:15.041 RUH Desc #007: RUH Type: Initially Isolated 00:10:15.041 00:10:15.041 FDP reclaim unit handle usage log page 00:10:15.041 ====================================== 00:10:15.041 Number of Reclaim Unit Handles: 8 00:10:15.041 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:15.041 RUH Usage Desc #001: RUH Attributes: Unused 00:10:15.041 RUH Usage Desc #002: RUH Attributes: Unused 00:10:15.041 RUH Usage Desc #003: RUH Attributes: Unused 00:10:15.041 RUH Usage Desc #004: RUH Attributes: Unused 00:10:15.041 RUH Usage Desc #005: RUH Attributes: Unused 00:10:15.041 RUH Usage Desc #006: RUH Attributes: Unused 00:10:15.041 RUH Usage Desc #007: RUH Attributes: Unused 00:10:15.041 00:10:15.041 FDP statistics log page 00:10:15.041 ======================= 00:10:15.041 Host bytes with metadata written: 1498460160 00:10:15.041 Media bytes with metadata written: 1498677248 00:10:15.041 Media bytes erased: 0 00:10:15.041 00:10:15.041 FDP Reclaim unit handle status 00:10:15.041 ============================== 00:10:15.041 Number of RUHS descriptors: 2 00:10:15.041 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000af5 00:10:15.041 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:15.041 00:10:15.041 FDP write on placement id: 0 success 00:10:15.041 00:10:15.041 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:15.041 00:10:15.041 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:15.041 00:10:15.041 Get Feature: FDP Events for Placement handle: #0 00:10:15.041 ======================== 00:10:15.041 Number of FDP Events: 6 00:10:15.041 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:15.041 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:15.041 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:15.041 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:15.041 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:15.041 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:15.041 00:10:15.041 FDP events log
page 00:10:15.041 =================== 00:10:15.042 Number of FDP events: 1 00:10:15.042 FDP Event #0: 00:10:15.042 Event Type: RU Not Written to Capacity 00:10:15.042 Placement Identifier: Valid 00:10:15.042 NSID: Valid 00:10:15.042 Location: Valid 00:10:15.042 Placement Identifier: 0 00:10:15.042 Event Timestamp: 5 00:10:15.042 Namespace Identifier: 1 00:10:15.042 Reclaim Group Identifier: 0 00:10:15.042 Reclaim Unit Handle Identifier: 0 00:10:15.042 00:10:15.042 FDP test passed 00:10:15.042 00:10:15.042 real 0m0.245s 00:10:15.042 user 0m0.068s 00:10:15.042 sys 0m0.075s 00:10:15.042 00:32:51 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.042 ************************************ 00:10:15.042 00:32:51 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:15.042 END TEST nvme_flexible_data_placement 00:10:15.042 ************************************ 00:10:15.042 00:10:15.042 real 0m8.001s 00:10:15.042 user 0m1.134s 00:10:15.042 sys 0m1.602s 00:10:15.042 00:32:51 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.042 ************************************ 00:10:15.042 END TEST nvme_fdp 00:10:15.042 ************************************ 00:10:15.042 00:32:51 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:15.042 00:32:51 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:15.042 00:32:51 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:15.042 00:32:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.042 00:32:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.042 00:32:51 -- common/autotest_common.sh@10 -- # set +x 00:10:15.042 ************************************ 00:10:15.042 START TEST nvme_rpc 00:10:15.042 ************************************ 00:10:15.042 00:32:51 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:15.303 * Looking for test storage... 
00:10:15.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.303 00:32:51 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.303 --rc genhtml_branch_coverage=1 00:10:15.303 --rc genhtml_function_coverage=1 00:10:15.303 --rc genhtml_legend=1 00:10:15.303 --rc geninfo_all_blocks=1 00:10:15.303 --rc geninfo_unexecuted_blocks=1 00:10:15.303 00:10:15.303 ' 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.303 --rc genhtml_branch_coverage=1 00:10:15.303 --rc genhtml_function_coverage=1 00:10:15.303 --rc genhtml_legend=1 00:10:15.303 --rc geninfo_all_blocks=1 00:10:15.303 --rc geninfo_unexecuted_blocks=1 00:10:15.303 00:10:15.303 ' 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:15.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.303 --rc genhtml_branch_coverage=1 00:10:15.303 --rc genhtml_function_coverage=1 00:10:15.303 --rc genhtml_legend=1 00:10:15.303 --rc geninfo_all_blocks=1 00:10:15.303 --rc geninfo_unexecuted_blocks=1 00:10:15.303 00:10:15.303 ' 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.303 --rc genhtml_branch_coverage=1 00:10:15.303 --rc genhtml_function_coverage=1 00:10:15.303 --rc genhtml_legend=1 00:10:15.303 --rc geninfo_all_blocks=1 00:10:15.303 --rc geninfo_unexecuted_blocks=1 00:10:15.303 00:10:15.303 ' 00:10:15.303 00:32:51 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.303 00:32:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:15.303 00:32:51 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:15.303 00:32:52 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:15.303 00:32:52 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:15.303 00:32:52 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77712 00:10:15.303 00:32:52 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:15.303 00:32:52 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77712 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77712 ']' 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.303 00:32:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.565 [2024-11-27 00:32:52.090192] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
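get_first_nvme_bdf, traced above, asks gen_nvme.sh for an SPDK bdev config in JSON, extracts every controller's PCI address with jq, and takes the first entry as the target. Condensed into a sketch (paths as in this environment):

  # Discover NVMe PCI addresses the way the trace above does: gen_nvme.sh
  # emits a bdev config as JSON and jq pulls out each traddr field.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  bdf=${bdfs[0]}
  echo "$bdf"    # 0000:00:10.0 in the run above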
00:10:15.565 [2024-11-27 00:32:52.090342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77712 ] 00:10:15.565 [2024-11-27 00:32:52.245579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:15.565 [2024-11-27 00:32:52.276194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.565 [2024-11-27 00:32:52.276289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.509 00:32:52 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.509 00:32:52 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.509 00:32:52 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:16.509 Nvme0n1 00:10:16.509 00:32:53 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:16.509 00:32:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:16.771 request: 00:10:16.771 { 00:10:16.771 "bdev_name": "Nvme0n1", 00:10:16.771 "filename": "non_existing_file", 00:10:16.771 "method": "bdev_nvme_apply_firmware", 00:10:16.771 "req_id": 1 00:10:16.771 } 00:10:16.771 Got JSON-RPC error response 00:10:16.771 response: 00:10:16.771 { 00:10:16.771 "code": -32603, 00:10:16.771 "message": "open file failed." 00:10:16.771 } 00:10:16.771 00:32:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:16.771 00:32:53 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:16.771 00:32:53 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:17.033 00:32:53 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:17.033 00:32:53 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77712 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77712 ']' 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77712 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77712 00:10:17.033 killing process with pid 77712 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77712' 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77712 00:10:17.033 00:32:53 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77712 00:10:17.607 00:10:17.607 real 0m2.324s 00:10:17.607 user 0m4.426s 00:10:17.607 sys 0m0.595s 00:10:17.607 00:32:54 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.608 ************************************ 00:10:17.608 END TEST nvme_rpc 00:10:17.608 ************************************ 00:10:17.608 00:32:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.608 00:32:54 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:17.608 00:32:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:17.608 00:32:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.608 00:32:54 -- common/autotest_common.sh@10 -- # set +x 00:10:17.608 ************************************ 00:10:17.608 START TEST nvme_rpc_timeouts 00:10:17.608 ************************************ 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:17.608 * Looking for test storage... 00:10:17.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:17.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
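The interleaved "Waiting for process..." line is the test waiting for the RPC socket of the spdk_tgt it just launched; the full trace of that startup follows below. Condensed, the start/trap/wait pattern looks like this (the polling loop is a simplification, since the real waitforlisten also probes the RPC server, and the _77766 suffix in the log is taken to be the test shell's PID, written here as $$):

  # Launch the target, register cleanup for signals and normal exit, then
  # poll until the RPC unix socket appears (max_retries=100 as in the trace).
  tmpfile_default_settings=/tmp/settings_default_$$
  tmpfile_modified_settings=/tmp/settings_modified_$$
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
  spdk_tgt_pid=$!
  trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings}; exit 1' SIGINT SIGTERM EXIT
  echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
  for (( i = 0; i < 100; i++ )); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
  done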
00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.608 00:32:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.608 --rc genhtml_branch_coverage=1 00:10:17.608 --rc genhtml_function_coverage=1 00:10:17.608 --rc genhtml_legend=1 00:10:17.608 --rc geninfo_all_blocks=1 00:10:17.608 --rc geninfo_unexecuted_blocks=1 00:10:17.608 00:10:17.608 ' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.608 --rc genhtml_branch_coverage=1 00:10:17.608 --rc genhtml_function_coverage=1 00:10:17.608 --rc genhtml_legend=1 00:10:17.608 --rc geninfo_all_blocks=1 00:10:17.608 --rc geninfo_unexecuted_blocks=1 00:10:17.608 00:10:17.608 ' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.608 --rc genhtml_branch_coverage=1 00:10:17.608 --rc genhtml_function_coverage=1 00:10:17.608 --rc genhtml_legend=1 00:10:17.608 --rc geninfo_all_blocks=1 00:10:17.608 --rc geninfo_unexecuted_blocks=1 00:10:17.608 00:10:17.608 ' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.608 --rc genhtml_branch_coverage=1 00:10:17.608 --rc genhtml_function_coverage=1 00:10:17.608 --rc genhtml_legend=1 00:10:17.608 --rc geninfo_all_blocks=1 00:10:17.608 --rc geninfo_unexecuted_blocks=1 00:10:17.608 00:10:17.608 ' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.608 00:32:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77766 00:10:17.608 00:32:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77766 00:10:17.608 00:32:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77798 00:10:17.608 00:32:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:17.608 00:32:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77798 00:10:17.608 00:32:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77798 ']' 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
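One more piece repeats at the top of both rpc tests: the scripts/common.sh version gate (lt 1.15 2) that decides which lcov option spelling to use. It splits the two version strings on '.', '-' or ':' and compares them numerically, field by field. A trimmed sketch (this simplification assumes purely numeric components; the real cmp_versions also normalizes non-numeric fields):

  # Return success when version $1 is strictly older than $2.
  lt() {
    local -a ver1 ver2
    local v n
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # newer
    done
    return 1    # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov older than 2: keep the lcov 1.x --rc option names'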
00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.608 00:32:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:17.871 [2024-11-27 00:32:54.416985] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:10:17.871 [2024-11-27 00:32:54.417408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77798 ] 00:10:17.871 [2024-11-27 00:32:54.597072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.871 [2024-11-27 00:32:54.628102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.871 [2024-11-27 00:32:54.628159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.817 00:32:55 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.817 00:32:55 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:18.817 00:32:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:18.817 Checking default timeout settings: 00:10:18.817 00:32:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:19.078 00:32:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:19.078 Making settings changes with rpc: 00:10:19.078 00:32:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:19.078 00:32:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:19.078 Check default vs. modified settings: 00:10:19.078 00:32:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77766 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77766 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:19.651 Setting action_on_timeout is changed as expected. 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
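The check just traced is a straightforward round-trip: dump the default config, change the bdev_nvme timeouts over JSON-RPC, dump again, then compare the keys of interest. The three RPC calls written out as a sketch (the redirections into the two settings files are implied by the later greps; the flag values are verbatim from the trace):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Capture defaults, apply new timeouts, capture again for comparison.
  "$rpc_py" save_config > "$tmpfile_default_settings"
  "$rpc_py" bdev_nvme_set_options \
    --timeout-us=12000000 \
    --timeout-admin-us=24000000 \
    --action-on-timeout=abort
  "$rpc_py" save_config > "$tmpfile_modified_settings"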
00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77766 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77766 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:19.651 Setting timeout_us is changed as expected. 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77766 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77766 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:19.651 Setting timeout_admin_us is changed as expected. 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
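Each setting is then verified the same way: grep the key out of both dumps, keep the second column, strip punctuation, and confirm the value changed. A sketch of that extraction (the grep/awk/sed pipeline is the one traced above; the expected-value argument is an addition for illustration):

  # Compare one key across the default and modified config dumps.
  check_setting() {
    local setting=$1 expected=$2 before after
    before=$(grep "$setting" "$tmpfile_default_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $after == "$expected" && $before != "$after" ]] &&
      echo "Setting $setting is changed as expected."
  }
  check_setting action_on_timeout abort      # none -> abort
  check_setting timeout_us 12000000          # 0 -> 12000000
  check_setting timeout_admin_us 24000000    # 0 -> 24000000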
00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77766 /tmp/settings_modified_77766 00:10:19.651 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77798 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77798 ']' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77798 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77798 00:10:19.651 killing process with pid 77798 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77798' 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77798 00:10:19.651 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77798 00:10:19.914 RPC TIMEOUT SETTING TEST PASSED. 00:10:19.914 00:32:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:19.914 00:10:19.914 real 0m2.280s 00:10:19.914 user 0m4.455s 00:10:19.914 sys 0m0.562s 00:10:19.914 ************************************ 00:10:19.914 END TEST nvme_rpc_timeouts 00:10:19.914 ************************************ 00:10:19.914 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.914 00:32:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:19.914 00:32:56 -- spdk/autotest.sh@239 -- # uname -s 00:10:19.914 00:32:56 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:19.914 00:32:56 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:19.914 00:32:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.914 00:32:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.914 00:32:56 -- common/autotest_common.sh@10 -- # set +x 00:10:19.914 ************************************ 00:10:19.914 START TEST sw_hotplug 00:10:19.914 ************************************ 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:19.914 * Looking for test storage... 
00:10:19.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.914 00:32:56 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.914 --rc genhtml_branch_coverage=1 00:10:19.914 --rc genhtml_function_coverage=1 00:10:19.914 --rc genhtml_legend=1 00:10:19.914 --rc geninfo_all_blocks=1 00:10:19.914 --rc geninfo_unexecuted_blocks=1 00:10:19.914 00:10:19.914 ' 00:10:19.914 00:32:56 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.914 --rc genhtml_branch_coverage=1 00:10:19.914 --rc genhtml_function_coverage=1 00:10:19.914 --rc genhtml_legend=1 00:10:19.914 --rc geninfo_all_blocks=1 00:10:19.914 --rc geninfo_unexecuted_blocks=1 00:10:19.914 00:10:19.914 ' 00:10:19.914 00:32:56 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.914 --rc genhtml_branch_coverage=1 00:10:19.915 --rc genhtml_function_coverage=1 00:10:19.915 --rc genhtml_legend=1 00:10:19.915 --rc geninfo_all_blocks=1 00:10:19.915 --rc geninfo_unexecuted_blocks=1 00:10:19.915 00:10:19.915 ' 00:10:19.915 00:32:56 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.915 --rc genhtml_branch_coverage=1 00:10:19.915 --rc genhtml_function_coverage=1 00:10:19.915 --rc genhtml_legend=1 00:10:19.915 --rc geninfo_all_blocks=1 00:10:19.915 --rc geninfo_unexecuted_blocks=1 00:10:19.915 00:10:19.915 ' 00:10:19.915 00:32:56 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:20.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:20.489 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:20.489 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:20.489 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:20.489 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:20.489 00:32:57 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:20.489 00:32:57 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:20.489 00:32:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:20.489 00:32:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:20.489 
00:32:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:20.489 00:32:57 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:20.489 00:32:57 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:20.489 00:32:57 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:20.489 00:32:57 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:20.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:21.012 Waiting for block devices as requested 00:10:21.012 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.273 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.273 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.273 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:26.563 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:26.563 00:33:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:26.563 00:33:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.824 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:26.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.824 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:27.085 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:27.347 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:27.347 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:27.609 00:33:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78648 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:27.609 00:33:04 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:27.609 00:33:04 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:27.609 00:33:04 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:27.609 00:33:04 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:27.609 00:33:04 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:27.609 00:33:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:27.871 Initializing NVMe Controllers 00:10:27.871 Attaching to 0000:00:10.0 00:10:27.871 Attaching to 0000:00:11.0 00:10:27.871 Attached to 0000:00:11.0 00:10:27.871 Attached to 0000:00:10.0 00:10:27.871 Initialization complete. Starting I/O... 00:10:27.871 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:27.871 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:27.871 00:10:28.832 QEMU NVMe Ctrl (12341 ): 2492 I/Os completed (+2492) 00:10:28.832 QEMU NVMe Ctrl (12340 ): 2496 I/Os completed (+2496) 00:10:28.832 00:10:29.811 QEMU NVMe Ctrl (12341 ): 6167 I/Os completed (+3675) 00:10:29.811 QEMU NVMe Ctrl (12340 ): 6092 I/Os completed (+3596) 00:10:29.811 00:10:30.753 QEMU NVMe Ctrl (12341 ): 10446 I/Os completed (+4279) 00:10:30.753 QEMU NVMe Ctrl (12340 ): 10246 I/Os completed (+4154) 00:10:30.753 00:10:31.697 QEMU NVMe Ctrl (12341 ): 14783 I/Os completed (+4337) 00:10:31.697 QEMU NVMe Ctrl (12340 ): 14576 I/Os completed (+4330) 00:10:31.697 00:10:33.083 QEMU NVMe Ctrl (12341 ): 17579 I/Os completed (+2796) 00:10:33.083 QEMU NVMe Ctrl (12340 ): 17375 I/Os completed (+2799) 00:10:33.083 00:10:33.655 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:33.655 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.655 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.656 [2024-11-27 00:33:10.271567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:33.656 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:33.656 [2024-11-27 00:33:10.273098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.273234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.273272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.273308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:33.656 [2024-11-27 00:33:10.274902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.274967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.274984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.275001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:33.656 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:33.656 [2024-11-27 00:33:10.295263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
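Each hotplug event in this phase is driven purely through sysfs; the test first constrained setup.sh with PCI_ALLOWED so only the first two controllers stay bound to uio_pci_generic. The values echoed at sw_hotplug.sh@39-@62 tell the story, but the redirect targets never reach the xtrace. The sketch below fills them in with the standard Linux PCI sysfs knobs - an assumption about the script, not a copy of it - using the first controller as the example (root required):

    dev=0000:00:10.0

    echo 1 > "/sys/bus/pci/devices/$dev/remove"     # @40 - surprise-remove the function
    # ... I/O keeps running against the surviving controller meanwhile ...
    echo 1 > /sys/bus/pci/rescan                    # @56 - bring the slot back
                                                    #   (the trap at @112 uses the same knob)
    echo uio_pci_generic \
        > "/sys/bus/pci/devices/$dev/driver_override"   # @59 - pin the userspace driver
    echo "$dev" > /sys/bus/pci/drivers_probe        # @60/@61 - reprobe; the trace echoes
                                                    #   the BDF twice, targets assumed
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # @62 - clear the override

The "Controller removed" / "Attaching to" pairs in the surrounding log are the hotplug example app reacting to exactly this cycle.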
00:10:33.656 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:33.656 [2024-11-27 00:33:10.296537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.296588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.296606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.296621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:33.656 [2024-11-27 00:33:10.297959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.298007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.298026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 [2024-11-27 00:33:10.298038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:33.656 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:33.656 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:33.656 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:33.656 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:33.656 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:33.918 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:33.918 Attaching to 0000:00:10.0 00:10:33.918 Attached to 0000:00:10.0 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:33.918 00:33:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:33.918 Attaching to 0000:00:11.0 00:10:33.918 Attached to 0000:00:11.0 00:10:34.864 QEMU NVMe Ctrl (12340 ): 2964 I/Os completed (+2964) 00:10:34.864 QEMU NVMe Ctrl (12341 ): 2730 I/Os completed (+2730) 00:10:34.864 00:10:35.809 QEMU NVMe Ctrl (12340 ): 6160 I/Os completed (+3196) 00:10:35.809 QEMU NVMe Ctrl (12341 ): 5926 I/Os completed (+3196) 00:10:35.809 00:10:36.754 QEMU NVMe Ctrl (12340 ): 9184 I/Os completed (+3024) 00:10:36.754 QEMU NVMe Ctrl (12341 ): 8956 I/Os completed (+3030) 00:10:36.754 00:10:37.699 QEMU NVMe Ctrl (12340 ): 12168 I/Os completed (+2984) 00:10:37.699 QEMU NVMe Ctrl (12341 ): 11932 I/Os completed (+2976) 00:10:37.699 00:10:39.088 QEMU NVMe Ctrl (12340 ): 14692 I/Os completed (+2524) 00:10:39.088 QEMU NVMe Ctrl (12341 ): 14499 I/Os completed (+2567) 00:10:39.088 00:10:40.033 QEMU NVMe Ctrl (12340 ): 17644 I/Os completed (+2952) 00:10:40.033 QEMU NVMe Ctrl (12341 ): 17451 I/Os completed (+2952) 00:10:40.033 00:10:40.972 QEMU NVMe Ctrl (12340 ): 20437 I/Os completed (+2793) 00:10:40.972 QEMU NVMe Ctrl (12341 ): 20248 I/Os completed (+2797) 00:10:40.972 00:10:41.915 QEMU NVMe Ctrl (12340 ): 23694 I/Os completed (+3257) 00:10:41.915 QEMU NVMe Ctrl (12341 ): 23520 I/Os completed (+3272) 
00:10:41.915 00:10:42.857 QEMU NVMe Ctrl (12340 ): 26321 I/Os completed (+2627) 00:10:42.857 QEMU NVMe Ctrl (12341 ): 26178 I/Os completed (+2658) 00:10:42.857 00:10:43.804 QEMU NVMe Ctrl (12340 ): 28877 I/Os completed (+2556) 00:10:43.804 QEMU NVMe Ctrl (12341 ): 28748 I/Os completed (+2570) 00:10:43.804 00:10:44.747 QEMU NVMe Ctrl (12340 ): 31445 I/Os completed (+2568) 00:10:44.747 QEMU NVMe Ctrl (12341 ): 31339 I/Os completed (+2591) 00:10:44.747 00:10:45.688 QEMU NVMe Ctrl (12340 ): 34161 I/Os completed (+2716) 00:10:45.688 QEMU NVMe Ctrl (12341 ): 34059 I/Os completed (+2720) 00:10:45.688 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.950 [2024-11-27 00:33:22.609270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:45.950 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:45.950 [2024-11-27 00:33:22.610896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.611081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.611126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.611219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:45.950 [2024-11-27 00:33:22.614182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.614341] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.614381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.614463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.950 [2024-11-27 00:33:22.629528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:45.950 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:45.950 [2024-11-27 00:33:22.630992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.631039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.631059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.631074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:45.950 [2024-11-27 00:33:22.632441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.632488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.632509] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 [2024-11-27 00:33:22.632523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:45.950 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:46.211 Attaching to 0000:00:10.0 00:10:46.211 Attached to 0000:00:10.0 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.211 00:33:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:46.211 Attaching to 0000:00:11.0 00:10:46.211 Attached to 0000:00:11.0 00:10:46.783 QEMU NVMe Ctrl (12340 ): 1652 I/Os completed (+1652) 00:10:46.783 QEMU NVMe Ctrl (12341 ): 1467 I/Os completed (+1467) 00:10:46.783 00:10:47.743 QEMU NVMe Ctrl (12340 ): 4852 I/Os completed (+3200) 00:10:47.743 QEMU NVMe Ctrl (12341 ): 4673 I/Os completed (+3206) 00:10:47.743 00:10:49.128 QEMU NVMe Ctrl (12340 ): 8654 I/Os completed (+3802) 00:10:49.128 QEMU NVMe Ctrl (12341 ): 8428 I/Os completed (+3755) 00:10:49.128 00:10:49.697 QEMU NVMe Ctrl (12340 ): 12266 I/Os completed (+3612) 00:10:49.698 QEMU NVMe Ctrl (12341 ): 12049 I/Os completed (+3621) 00:10:49.698 00:10:51.077 QEMU NVMe Ctrl (12340 ): 16049 I/Os completed (+3783) 00:10:51.077 QEMU NVMe Ctrl (12341 ): 15823 I/Os completed (+3774) 00:10:51.077 00:10:52.020 QEMU NVMe Ctrl (12340 ): 19249 I/Os completed (+3200) 00:10:52.020 QEMU NVMe Ctrl (12341 ): 19032 I/Os completed (+3209) 00:10:52.020 00:10:52.964 QEMU NVMe Ctrl (12340 ): 22969 I/Os completed (+3720) 00:10:52.964 QEMU NVMe Ctrl (12341 ): 22755 I/Os completed (+3723) 00:10:52.964 00:10:53.903 QEMU NVMe Ctrl (12340 ): 27351 I/Os completed (+4382) 00:10:53.903 QEMU NVMe Ctrl (12341 ): 27206 I/Os completed (+4451) 00:10:53.903 
00:10:54.847 QEMU NVMe Ctrl (12340 ): 31555 I/Os completed (+4204) 00:10:54.847 QEMU NVMe Ctrl (12341 ): 31327 I/Os completed (+4121) 00:10:54.847 00:10:55.783 QEMU NVMe Ctrl (12340 ): 35841 I/Os completed (+4286) 00:10:55.783 QEMU NVMe Ctrl (12341 ): 35558 I/Os completed (+4231) 00:10:55.783 00:10:56.717 QEMU NVMe Ctrl (12340 ): 39482 I/Os completed (+3641) 00:10:56.717 QEMU NVMe Ctrl (12341 ): 39251 I/Os completed (+3693) 00:10:56.717 00:10:58.090 QEMU NVMe Ctrl (12340 ): 43178 I/Os completed (+3696) 00:10:58.090 QEMU NVMe Ctrl (12341 ): 42951 I/Os completed (+3700) 00:10:58.090 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.348 [2024-11-27 00:33:34.939113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:58.348 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:58.348 [2024-11-27 00:33:34.940271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.940391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.940425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.940540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:58.348 [2024-11-27 00:33:34.941797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.941872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.941905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.941984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.348 EAL: Scan for (pci) bus failed. 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.348 [2024-11-27 00:33:34.959140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
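The two EAL lines above ("cannot open sysfs value .../0000:00:10.0/vendor", "Scan for (pci) bus failed") are a side effect of the same mechanism: the example app rescans the PCI bus while 0000:00:10.0 is surprise-removed, so the device's sysfs directory is simply gone and the scan fails. The run continues and completes below, so the error is benign here. From a shell the same condition looks like this (illustrative check only, not from the test):

    bdf=0000:00:10.0
    if [[ ! -r "/sys/bus/pci/devices/$bdf/vendor" ]]; then
        echo "$bdf is detached: no sysfs node left to read"
    fi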
00:10:58.348 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:58.348 [2024-11-27 00:33:34.962012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.962170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.962240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.962270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:58.348 [2024-11-27 00:33:34.963684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.963733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.963753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 [2024-11-27 00:33:34.963766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:58.348 00:33:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:58.349 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.349 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.349 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:58.608 Attaching to 0000:00:10.0 00:10:58.608 Attached to 0000:00:10.0 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.608 00:33:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:58.608 Attaching to 0000:00:11.0 00:10:58.608 Attached to 0000:00:11.0 00:10:58.608 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:58.608 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:58.608 [2024-11-27 00:33:35.238602] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:10.879 00:33:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:10.879 00:33:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.879 00:33:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.97 00:11:10.879 00:33:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.97 00:11:10.879 00:33:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:10.879 00:33:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.97 00:11:10.879 00:33:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.97 2 00:11:10.879 remove_attach_helper took 42.97s to complete (handling 2 nvme drive(s)) 00:33:47 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78648 00:11:17.468 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78648) - No such process 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78648 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79191 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79191 00:11:17.468 00:33:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:17.468 00:33:53 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 79191 ']' 00:11:17.468 00:33:53 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.468 00:33:53 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.468 00:33:53 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.468 00:33:53 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.468 00:33:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.468 [2024-11-27 00:33:53.335782] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:11:17.468 [2024-11-27 00:33:53.336117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79191 ] 00:11:17.468 [2024-11-27 00:33:53.499778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.468 [2024-11-27 00:33:53.528216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:17.468 00:33:54 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:17.468 00:33:54 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:17.468 00:33:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.056 00:34:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.056 00:34:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.056 00:34:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.056 [2024-11-27 00:34:00.310809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:24.056 [2024-11-27 00:34:00.311884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.056 [2024-11-27 00:34:00.311913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.056 [2024-11-27 00:34:00.311929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.056 [2024-11-27 00:34:00.311941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.056 [2024-11-27 00:34:00.311951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.056 [2024-11-27 00:34:00.311958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.056 [2024-11-27 00:34:00.311968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.056 [2024-11-27 00:34:00.311977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.056 [2024-11-27 00:34:00.311985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.056 [2024-11-27 00:34:00.311992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.056 [2024-11-27 00:34:00.311999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.056 [2024-11-27 00:34:00.312006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.056 00:34:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.056 00:34:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.056 00:34:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:24.056 00:34:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.317 [2024-11-27 00:34:00.910812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:24.317 [2024-11-27 00:34:00.911823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.318 [2024-11-27 00:34:00.911862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.318 [2024-11-27 00:34:00.911872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.318 [2024-11-27 00:34:00.911884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.318 [2024-11-27 00:34:00.911891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.318 [2024-11-27 00:34:00.911899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.318 [2024-11-27 00:34:00.911905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.318 [2024-11-27 00:34:00.911913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.318 [2024-11-27 00:34:00.911919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.318 [2024-11-27 00:34:00.911928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.318 [2024-11-27 00:34:00.911934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.318 [2024-11-27 00:34:00.911941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.579 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:24.579 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.579 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.579 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.579 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.579 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.579 00:34:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.579 00:34:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.579 00:34:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.844 00:34:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.083 00:34:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.083 00:34:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.083 00:34:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:37.083 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.084 00:34:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.084 00:34:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.084 00:34:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.084 [2024-11-27 00:34:13.711009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:37.084 [2024-11-27 00:34:13.712167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.084 [2024-11-27 00:34:13.712197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.084 [2024-11-27 00:34:13.712208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.084 [2024-11-27 00:34:13.712220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.084 [2024-11-27 00:34:13.712228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.084 [2024-11-27 00:34:13.712254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.084 [2024-11-27 00:34:13.712262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.084 [2024-11-27 00:34:13.712268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.084 [2024-11-27 00:34:13.712276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.084 [2024-11-27 00:34:13.712282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.084 [2024-11-27 00:34:13.712290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.084 [2024-11-27 00:34:13.712296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:37.084 00:34:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.657 00:34:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.657 00:34:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.657 00:34:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:37.657 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:37.657 [2024-11-27 00:34:14.311018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
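With use_bdev=true the test now polls the SPDK target over RPC instead of watching sysfs. The bdev_bdfs helper visible in the trace at sw_hotplug.sh@12-@13 is just an RPC call plus a jq filter; the /dev/fd/63 in the trace is bash process substitution, for which a plain pipe is equivalent (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    bdev_bdfs() {
        # Ask the target for all bdevs, pull each NVMe bdev's PCI address
        # out of the JSON, and de-duplicate.
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))       # @50: e.g. (0000:00:10.0 0000:00:11.0)
    (( ${#bdfs[@]} > 0 ))     # @50: anything still attached?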
00:11:37.657 [2024-11-27 00:34:14.312004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.657 [2024-11-27 00:34:14.312033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.657 [2024-11-27 00:34:14.312043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.657 [2024-11-27 00:34:14.312054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.657 [2024-11-27 00:34:14.312061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.657 [2024-11-27 00:34:14.312069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.657 [2024-11-27 00:34:14.312076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.657 [2024-11-27 00:34:14.312084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.657 [2024-11-27 00:34:14.312090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.657 [2024-11-27 00:34:14.312098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.657 [2024-11-27 00:34:14.312104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.657 [2024-11-27 00:34:14.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.229 00:34:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.229 00:34:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.229 00:34:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.229 00:34:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
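The "Still waiting for ... to be gone" lines come from a drain loop at sw_hotplug.sh@50-@51: after the remove is issued, the test polls bdev_bdfs every half second until the removed controllers stop showing up as bdevs. The loop structure below is reconstructed from the repeating @50/@51 trace entries; the exact source layout may differ:

    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done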
00:11:38.229 00:34:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:38.491 00:34:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.491 00:34:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.726 00:34:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.726 00:34:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 00:34:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.726 00:34:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.726 00:34:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 [2024-11-27 00:34:27.111235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:50.726 [2024-11-27 00:34:27.112361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.726 [2024-11-27 00:34:27.112391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.726 [2024-11-27 00:34:27.112404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.726 [2024-11-27 00:34:27.112416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.726 [2024-11-27 00:34:27.112425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.726 [2024-11-27 00:34:27.112432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.726 [2024-11-27 00:34:27.112439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.726 [2024-11-27 00:34:27.112446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.726 [2024-11-27 00:34:27.112453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.726 [2024-11-27 00:34:27.112459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:50.726 [2024-11-27 00:34:27.112467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:50.726 [2024-11-27 00:34:27.112473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.726 00:34:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:50.726 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.987 00:34:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.987 00:34:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.987 00:34:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:50.987 00:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:51.248 [2024-11-27 00:34:27.811243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:51.248 [2024-11-27 00:34:27.812212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.248 [2024-11-27 00:34:27.812241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.248 [2024-11-27 00:34:27.812250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.248 [2024-11-27 00:34:27.812261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.248 [2024-11-27 00:34:27.812268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.248 [2024-11-27 00:34:27.812278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.248 [2024-11-27 00:34:27.812284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.248 [2024-11-27 00:34:27.812292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.248 [2024-11-27 00:34:27.812298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.248 [2024-11-27 00:34:27.812305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.248 [2024-11-27 00:34:27.812311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.248 [2024-11-27 00:34:27.812319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.509 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:51.509 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.510 00:34:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.510 00:34:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.510 00:34:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.510 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:51.770 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:51.771 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.771 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.771 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.771 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
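Tying the traced fragments together, the control flow of remove_attach_helper can be pieced back from the script line numbers that keep repeating (@27-@66). One note on the noisy @71 check between cycles: the run of \0\0\0\0\:... is just xtrace backslash-escaping the right-hand side of a [[ ... == ... ]] pattern match, i.e. a literal comparison of the recovered BDF list against "0000:00:10.0 0000:00:11.0". The skeleton below is a sketch under the same assumptions as the earlier snippets (sysfs targets inferred, conditionals filled in), not the script's source:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3  # @27-@29, e.g. "3 6 true"
        local dev bdfs                                       # @30
        sleep "$hotplug_wait"                                # @36 - let I/O ramp up first
        while (( hotplug_events-- )); do                     # @38 - three cycles per run
            for dev in "${nvmes[@]}"; do                     # @39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"  # @40 (target assumed)
            done
            if "$use_bdev"; then                             # @43 - runs true/false literally
                while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} )); do
                    sleep 0.5                                # @50/@51 drain loop
                done
            fi
            echo 1 > /sys/bus/pci/rescan                     # @56 (target assumed)
            # @58-@62: rebind each BDF to uio_pci_generic
            sleep 12                                         # @66 - let I/O resume
        done
    }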
00:11:51.771 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:51.771 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.771 00:34:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.26 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.26 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.26 00:12:04.003 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.26 2 00:12:04.003 remove_attach_helper took 46.26s to complete (handling 2 nvme drive(s)) 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.003 00:34:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:04.004 00:34:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:04.004 00:34:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:04.004 00:34:40 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.591 00:34:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.591 00:34:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.591 00:34:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:10.591 00:34:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:10.591 [2024-11-27 00:34:46.598833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:10.591 [2024-11-27 00:34:46.599657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.599679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.599691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.591 [2024-11-27 00:34:46.599702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.599710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.599717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.591 [2024-11-27 00:34:46.599725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.599731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.599742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.591 [2024-11-27 00:34:46.599749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.599756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.599762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.591 [2024-11-27 00:34:46.998836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
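The (( 2 > 0 )) / sleep 0.5 trace above is the detach poll: after the simulated surprise removal, the script re-runs bdev_bdfs every half second and reports which BDFs are still visible until the count drops to zero. A sketch of that wait pattern, reusing the bdev_bdfs helper sketched earlier (the iteration cap is an illustrative safety bound, not part of the traced script):

  # Wait for hot-removed controllers to disappear from the bdev list.
  for ((i = 0; i < 120; i++)); do
      bdfs=($(bdev_bdfs))
      (( ${#bdfs[@]} == 0 )) && break
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
  done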
00:12:10.591 [2024-11-27 00:34:46.999662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.999693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.999703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.591 [2024-11-27 00:34:46.999714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.999722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.999731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.591 [2024-11-27 00:34:46.999738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.999746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.999753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.591 [2024-11-27 00:34:46.999761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.591 [2024-11-27 00:34:46.999768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.591 [2024-11-27 00:34:46.999779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.592 00:34:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.592 00:34:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.592 00:34:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.592 00:34:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.826 00:34:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.826 00:34:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.826 00:34:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.826 00:34:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.826 00:34:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.826 00:34:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:22.826 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:22.826 [2024-11-27 00:34:59.499108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
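The sw_hotplug.sh@71 lines above look mangled but are just how set -x prints an unquoted [[ == ]] pattern: every character of the expected string is backslash-escaped so the match is literal rather than a glob. The check itself only confirms that, after reattach, the two expected BDFs are back. An equivalent, more readable form (variable names are illustrative):

  expected='0000:00:10.0 0000:00:11.0'
  actual=$(bdev_bdfs | tr '\n' ' ')
  # Quoting the right-hand side makes [[ == ]] a literal string comparison.
  [[ ${actual% } == "$expected" ]] && echo 'both controllers reattached'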
00:12:22.826 [2024-11-27 00:34:59.500126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.826 [2024-11-27 00:34:59.500146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.826 [2024-11-27 00:34:59.500157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.826 [2024-11-27 00:34:59.500168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.826 [2024-11-27 00:34:59.500177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.826 [2024-11-27 00:34:59.500183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.826 [2024-11-27 00:34:59.500191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.826 [2024-11-27 00:34:59.500198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.826 [2024-11-27 00:34:59.500206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.826 [2024-11-27 00:34:59.500212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.826 [2024-11-27 00:34:59.500220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.826 [2024-11-27 00:34:59.500226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.399 [2024-11-27 00:34:59.899117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
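Every detach in this log ends with the same burst per controller: nvme_ctrlr_fail marks the device failed, then each outstanding ASYNC EVENT REQUEST is completed as ABORTED - BY REQUEST, which is the expected teardown path rather than a real failure. When reading a saved copy of this output, a quick illustrative way to confirm each controller failed the expected number of times (build.log is a hypothetical file name):

  # Count failed-state transitions per controller BDF in a saved log.
  grep -o '\[0000:[0-9a-f:.]*, [0-9]*\] in failed state' build.log |
      cut -d',' -f1 | tr -d '[' | sort | uniq -c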
00:12:23.399 [2024-11-27 00:34:59.901531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.399 [2024-11-27 00:34:59.901564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.399 [2024-11-27 00:34:59.901583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.399 [2024-11-27 00:34:59.901594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.399 [2024-11-27 00:34:59.901601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.399 [2024-11-27 00:34:59.901611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.399 [2024-11-27 00:34:59.901617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.399 [2024-11-27 00:34:59.901626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.399 [2024-11-27 00:34:59.901633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.399 [2024-11-27 00:34:59.901640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.399 [2024-11-27 00:34:59.901647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.399 [2024-11-27 00:34:59.901654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.399 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:23.399 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:23.399 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:23.399 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.399 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.399 00:34:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.399 00:34:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.399 00:34:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.399 00:34:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:23.399 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:23.660 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:23.661 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:23.661 00:35:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:35.895 00:35:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.895 00:35:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.895 00:35:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:35.895 [2024-11-27 00:35:12.299385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:35.895 [2024-11-27 00:35:12.300399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.895 [2024-11-27 00:35:12.300502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.895 [2024-11-27 00:35:12.300570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.895 [2024-11-27 00:35:12.300705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.895 [2024-11-27 00:35:12.300746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.895 [2024-11-27 00:35:12.300828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.895 [2024-11-27 00:35:12.300868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.895 [2024-11-27 00:35:12.300887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.895 [2024-11-27 00:35:12.300939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.895 [2024-11-27 00:35:12.300964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:35.895 [2024-11-27 00:35:12.300983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.895 [2024-11-27 00:35:12.301027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:35.895 00:35:12 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:35.895 00:35:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:35.895 00:35:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.895 00:35:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:35.895 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:36.157 [2024-11-27 00:35:12.699386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:36.157 [2024-11-27 00:35:12.700297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.157 [2024-11-27 00:35:12.700405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.157 [2024-11-27 00:35:12.700418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.157 [2024-11-27 00:35:12.700429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.157 [2024-11-27 00:35:12.700436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.157 [2024-11-27 00:35:12.700444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.157 [2024-11-27 00:35:12.700451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.157 [2024-11-27 00:35:12.700460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.157 [2024-11-27 00:35:12.700467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.157 [2024-11-27 00:35:12.700475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.157 [2024-11-27 00:35:12.700482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:36.157 [2024-11-27 00:35:12.700489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:36.157 00:35:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.157 00:35:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.157 00:35:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:36.157 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:36.418 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:36.418 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:36.418 00:35:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:36.418 00:35:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:48.724 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:48.724 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.67 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.67 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.67 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.67 2 00:12:48.725 remove_attach_helper took 44.67s to complete (handling 2 nvme drive(s)) 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79191 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 79191 ']' 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 79191 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79191 00:12:48.725 killing process with pid 79191 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79191' 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@973 -- # kill 79191 00:12:48.725 00:35:25 sw_hotplug -- common/autotest_common.sh@978 -- # wait 79191 00:12:48.725 00:35:25 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:49.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:49.559 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:49.559 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:49.559 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:49.821 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:49.821 00:12:49.821 real 2m29.939s 00:12:49.821 user 1m50.548s 00:12:49.821 sys 0m17.937s 00:12:49.821 00:35:26 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.821 ************************************ 00:12:49.821 END TEST sw_hotplug 00:12:49.821 ************************************ 00:12:49.821 00:35:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:49.821 00:35:26 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:49.821 00:35:26 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:49.821 00:35:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:49.821 00:35:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.821 00:35:26 -- common/autotest_common.sh@10 -- # set +x 00:12:49.821 ************************************ 00:12:49.821 START TEST nvme_xnvme 00:12:49.821 ************************************ 00:12:49.821 00:35:26 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:50.084 * Looking for test storage... 
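The killprocess trace above is the standard teardown helper: it checks that the pid is non-empty and still alive, reads the process name (reactor_0 here) to make sure it is not about to kill a sudo wrapper, then kills and waits so the target's exit status is actually collected. A minimal sketch of that pattern, using only the calls visible in the trace (the early-return behavior is a sketch-level choice):

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone
      process_name=$(ps --no-headers -o comm= "$pid")
      [[ $process_name != sudo ]] || return 1         # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # valid because the target was started by this shell
  }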
00:12:50.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:50.084 00:35:26 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.084 00:35:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.084 00:35:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.084 00:35:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.084 00:35:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.085 --rc genhtml_branch_coverage=1 00:12:50.085 --rc genhtml_function_coverage=1 00:12:50.085 --rc genhtml_legend=1 00:12:50.085 --rc geninfo_all_blocks=1 00:12:50.085 --rc geninfo_unexecuted_blocks=1 00:12:50.085 00:12:50.085 ' 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.085 --rc genhtml_branch_coverage=1 00:12:50.085 --rc genhtml_function_coverage=1 00:12:50.085 --rc genhtml_legend=1 00:12:50.085 --rc geninfo_all_blocks=1 00:12:50.085 --rc geninfo_unexecuted_blocks=1 00:12:50.085 00:12:50.085 ' 00:12:50.085 00:35:26 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.085 --rc genhtml_branch_coverage=1 00:12:50.085 --rc genhtml_function_coverage=1 00:12:50.085 --rc genhtml_legend=1 00:12:50.085 --rc geninfo_all_blocks=1 00:12:50.085 --rc geninfo_unexecuted_blocks=1 00:12:50.085 00:12:50.085 ' 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.085 --rc genhtml_branch_coverage=1 00:12:50.085 --rc genhtml_function_coverage=1 00:12:50.085 --rc genhtml_legend=1 00:12:50.085 --rc geninfo_all_blocks=1 00:12:50.085 --rc geninfo_unexecuted_blocks=1 00:12:50.085 00:12:50.085 ' 00:12:50.085 00:35:26 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:50.085 00:35:26 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:50.085 00:35:26 nvme_xnvme -- 
common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:50.085 00:35:26 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:50.085 00:35:26 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:50.085 00:35:26 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:50.085 00:35:26 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:50.085 00:35:26 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:50.085 00:35:26 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:50.085 00:35:26 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:50.085 00:35:26 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:50.085 00:35:26 nvme_xnvme -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:50.086 #define SPDK_CONFIG_H 00:12:50.086 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:50.086 #define SPDK_CONFIG_APPS 1 00:12:50.086 #define SPDK_CONFIG_ARCH native 00:12:50.086 #define SPDK_CONFIG_ASAN 1 00:12:50.086 #undef SPDK_CONFIG_AVAHI 00:12:50.086 #undef SPDK_CONFIG_CET 00:12:50.086 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:50.086 #define SPDK_CONFIG_COVERAGE 1 00:12:50.086 #define SPDK_CONFIG_CROSS_PREFIX 00:12:50.086 #undef SPDK_CONFIG_CRYPTO 00:12:50.086 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:50.086 #undef SPDK_CONFIG_CUSTOMOCF 00:12:50.086 #undef SPDK_CONFIG_DAOS 00:12:50.086 #define SPDK_CONFIG_DAOS_DIR 00:12:50.086 #define SPDK_CONFIG_DEBUG 1 00:12:50.086 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:50.086 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:50.086 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:50.086 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:50.086 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:50.086 #undef SPDK_CONFIG_DPDK_UADK 00:12:50.086 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:50.086 #define SPDK_CONFIG_EXAMPLES 1 00:12:50.086 #undef SPDK_CONFIG_FC 00:12:50.086 #define SPDK_CONFIG_FC_PATH 00:12:50.086 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:50.086 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:50.086 #define SPDK_CONFIG_FSDEV 1 00:12:50.086 #undef SPDK_CONFIG_FUSE 00:12:50.086 #undef SPDK_CONFIG_FUZZER 00:12:50.086 #define SPDK_CONFIG_FUZZER_LIB 00:12:50.086 #undef SPDK_CONFIG_GOLANG 00:12:50.086 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:50.086 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:50.086 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:50.086 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:50.086 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:50.086 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:50.086 #undef SPDK_CONFIG_HAVE_LZ4 00:12:50.086 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:50.086 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:50.086 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:50.086 #define SPDK_CONFIG_IDXD 1 00:12:50.086 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:50.086 #undef SPDK_CONFIG_IPSEC_MB 00:12:50.086 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:50.086 #define SPDK_CONFIG_ISAL 1 00:12:50.086 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:50.086 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:50.086 #define SPDK_CONFIG_LIBDIR 00:12:50.086 #undef SPDK_CONFIG_LTO 00:12:50.086 #define SPDK_CONFIG_MAX_LCORES 128 00:12:50.086 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:50.086 #define SPDK_CONFIG_NVME_CUSE 1 00:12:50.086 #undef 
SPDK_CONFIG_OCF 00:12:50.086 #define SPDK_CONFIG_OCF_PATH 00:12:50.086 #define SPDK_CONFIG_OPENSSL_PATH 00:12:50.086 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:50.086 #define SPDK_CONFIG_PGO_DIR 00:12:50.086 #undef SPDK_CONFIG_PGO_USE 00:12:50.086 #define SPDK_CONFIG_PREFIX /usr/local 00:12:50.086 #undef SPDK_CONFIG_RAID5F 00:12:50.086 #undef SPDK_CONFIG_RBD 00:12:50.086 #define SPDK_CONFIG_RDMA 1 00:12:50.086 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:50.086 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:50.086 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:50.086 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:50.086 #define SPDK_CONFIG_SHARED 1 00:12:50.086 #undef SPDK_CONFIG_SMA 00:12:50.086 #define SPDK_CONFIG_TESTS 1 00:12:50.086 #undef SPDK_CONFIG_TSAN 00:12:50.086 #define SPDK_CONFIG_UBLK 1 00:12:50.086 #define SPDK_CONFIG_UBSAN 1 00:12:50.086 #undef SPDK_CONFIG_UNIT_TESTS 00:12:50.086 #undef SPDK_CONFIG_URING 00:12:50.086 #define SPDK_CONFIG_URING_PATH 00:12:50.086 #undef SPDK_CONFIG_URING_ZNS 00:12:50.086 #undef SPDK_CONFIG_USDT 00:12:50.086 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:50.086 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:50.086 #undef SPDK_CONFIG_VFIO_USER 00:12:50.086 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:50.086 #define SPDK_CONFIG_VHOST 1 00:12:50.086 #define SPDK_CONFIG_VIRTIO 1 00:12:50.086 #undef SPDK_CONFIG_VTUNE 00:12:50.086 #define SPDK_CONFIG_VTUNE_DIR 00:12:50.086 #define SPDK_CONFIG_WERROR 1 00:12:50.086 #define SPDK_CONFIG_WPDK_DIR 00:12:50.086 #define SPDK_CONFIG_XNVME 1 00:12:50.086 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:50.086 00:35:26 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.086 00:35:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.086 00:35:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.086 00:35:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.086 00:35:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.086 00:35:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.086 00:35:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.086 00:35:26 nvme_xnvme -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.086 00:35:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:50.086 00:35:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:50.086 00:35:26 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:50.086 00:35:26 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:50.087 00:35:26 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@140 -- # : v23.11 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:50.087 00:35:26 
nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@182 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:50.087 00:35:26 nvme_xnvme -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:50.088 00:35:26 nvme_xnvme -- 
common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80556 ]] 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80556 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.H47mOu 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.H47mOu/tests/xnvme /tmp/spdk.H47mOu 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13250670592 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:50.088 00:35:26 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6333370368 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261960704 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13250670592 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6333370368 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265237504 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:50.088 00:35:26 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98502701056 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:50.088 00:35:26 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1200078848 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:50.089 * Looking for test storage... 
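The df -T scan above is autotest_common.sh's set_test_storage walking its storage candidates and settling on the first directory whose filesystem still has the requested ~2 GiB free (here /home/vagrant/spdk_repo/spdk/test/nvme/xnvme on btrfs). A minimal standalone sketch of that selection logic follows; the helper name pick_test_storage and the simplified df parsing are ours, not the harness's exact code:

    #!/usr/bin/env bash
    # Return the first candidate directory whose filesystem has at least
    # $1 bytes available, mirroring the target_space >= requested_size check.
    pick_test_storage() {
        local requested_size=$1; shift
        local dir avail_kb
        for dir in "$@"; do
            # df -P prints POSIX format: data row 2, column 4 = free 1K blocks
            avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
            if (( avail_kb * 1024 >= requested_size )); then
                echo "$dir"
                return 0
            fi
        done
        return 1
    }
    # As in the trace: requested_size=2214592512 (2 GiB plus overhead), e.g.
    # pick_test_storage 2214592512 /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp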
00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13250670592 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:50.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.089 00:35:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.351 00:35:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:50.351 00:35:26 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.351 00:35:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.351 --rc genhtml_branch_coverage=1 00:12:50.351 --rc genhtml_function_coverage=1 00:12:50.351 --rc genhtml_legend=1 00:12:50.351 --rc geninfo_all_blocks=1 00:12:50.351 --rc geninfo_unexecuted_blocks=1 00:12:50.351 00:12:50.351 ' 00:12:50.351 00:35:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.351 --rc genhtml_branch_coverage=1 00:12:50.351 --rc genhtml_function_coverage=1 00:12:50.351 --rc genhtml_legend=1 00:12:50.351 --rc geninfo_all_blocks=1 
00:12:50.351 --rc geninfo_unexecuted_blocks=1 00:12:50.351 00:12:50.351 ' 00:12:50.351 00:35:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.351 --rc genhtml_branch_coverage=1 00:12:50.351 --rc genhtml_function_coverage=1 00:12:50.351 --rc genhtml_legend=1 00:12:50.351 --rc geninfo_all_blocks=1 00:12:50.351 --rc geninfo_unexecuted_blocks=1 00:12:50.351 00:12:50.351 ' 00:12:50.351 00:35:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.351 --rc genhtml_branch_coverage=1 00:12:50.351 --rc genhtml_function_coverage=1 00:12:50.351 --rc genhtml_legend=1 00:12:50.351 --rc geninfo_all_blocks=1 00:12:50.351 --rc geninfo_unexecuted_blocks=1 00:12:50.351 00:12:50.351 ' 00:12:50.351 00:35:26 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.351 00:35:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.351 00:35:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.351 00:35:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.352 00:35:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.352 00:35:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:50.352 00:35:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.352 00:35:26 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:50.352 00:35:26 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:50.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:50.613 Waiting for block devices as requested 00:12:50.888 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.888 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:50.888 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.150 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:56.443 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:56.443 00:35:32 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:56.443 00:35:33 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:56.443 00:35:33 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:56.705 00:35:33 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:56.705 00:35:33 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:56.705 No valid GPT data, bailing 00:12:56.705 00:35:33 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:56.705 00:35:33 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:56.705 00:35:33 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:56.705 00:35:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:56.705 00:35:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.705 00:35:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.705 00:35:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.705 ************************************ 00:12:56.705 START TEST xnvme_rpc 00:12:56.705 ************************************ 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:56.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80942 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80942 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80942 ']' 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.705 00:35:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.705 [2024-11-27 00:35:33.483221] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
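Before claiming a namespace, the prep loop above asks scripts/common.sh's block_in_use whether /dev/nvme0n1 already carries a partition table: spdk-gpt.py reports "No valid GPT data, bailing" and blkid -s PTTYPE returns nothing, so the device is treated as free. A simplified sketch of that gate, leaving out the spdk-gpt.py probe (nvme_ns_in_use is our name, not the harness's):

    # Succeed (device busy) when blkid sees any partition-table type on it.
    nvme_ns_in_use() {
        local dev=$1 pt
        pt=$(blkid -s PTTYPE -o value "$dev")
        [[ -n $pt ]]
    }
    if ! nvme_ns_in_use /dev/nvme0n1; then
        echo "/dev/nvme0n1 has no partition table; safe to use for xnvme tests"
    fi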
00:12:56.705 [2024-11-27 00:35:33.483373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80942 ] 00:12:56.967 [2024-11-27 00:35:33.649115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.967 [2024-11-27 00:35:33.678305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 xnvme_bdev 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80942 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80942 ']' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80942 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80942 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.912 killing process with pid 80942 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80942' 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80942 00:12:57.912 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80942 00:12:58.174 00:12:58.174 real 0m1.455s 00:12:58.174 user 0m1.489s 00:12:58.174 sys 0m0.447s 00:12:58.174 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.174 00:35:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.174 ************************************ 00:12:58.174 END TEST xnvme_rpc 00:12:58.174 ************************************ 00:12:58.174 00:35:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:58.174 00:35:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.174 00:35:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.174 00:35:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.174 ************************************ 00:12:58.174 START TEST xnvme_bdevperf 00:12:58.174 ************************************ 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:58.174 00:35:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:58.174 { 00:12:58.174 "subsystems": [ 00:12:58.174 { 00:12:58.174 "subsystem": "bdev", 00:12:58.174 "config": [ 00:12:58.174 { 00:12:58.174 "params": { 00:12:58.174 "io_mechanism": "libaio", 00:12:58.174 "conserve_cpu": false, 00:12:58.174 "filename": "/dev/nvme0n1", 00:12:58.174 "name": "xnvme_bdev" 00:12:58.174 }, 00:12:58.174 "method": "bdev_xnvme_create" 00:12:58.174 }, 00:12:58.174 { 00:12:58.174 "method": "bdev_wait_for_examine" 00:12:58.174 } 00:12:58.174 ] 00:12:58.174 } 00:12:58.174 ] 00:12:58.174 } 00:12:58.436 [2024-11-27 00:35:34.994172] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:12:58.436 [2024-11-27 00:35:34.994319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81000 ] 00:12:58.436 [2024-11-27 00:35:35.157035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.436 [2024-11-27 00:35:35.186499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.699 Running I/O for 5 seconds... 00:13:00.591 24807.00 IOPS, 96.90 MiB/s [2024-11-27T00:35:38.323Z] 26471.50 IOPS, 103.40 MiB/s [2024-11-27T00:35:39.710Z] 25492.00 IOPS, 99.58 MiB/s [2024-11-27T00:35:40.655Z] 25041.25 IOPS, 97.82 MiB/s 00:13:03.868 Latency(us) 00:13:03.868 [2024-11-27T00:35:40.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.868 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:03.868 xnvme_bdev : 5.00 24807.14 96.90 0.00 0.00 2574.04 275.69 6956.90 00:13:03.868 [2024-11-27T00:35:40.655Z] =================================================================================================================== 00:13:03.868 [2024-11-27T00:35:40.655Z] Total : 24807.14 96.90 0.00 0.00 2574.04 275.69 6956.90 00:13:03.868 00:35:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.868 00:35:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:03.868 00:35:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:03.868 00:35:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:03.868 00:35:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.868 { 00:13:03.868 "subsystems": [ 00:13:03.868 { 00:13:03.868 "subsystem": "bdev", 00:13:03.868 "config": [ 00:13:03.868 { 00:13:03.868 "params": { 00:13:03.868 "io_mechanism": "libaio", 00:13:03.868 "conserve_cpu": false, 00:13:03.868 "filename": "/dev/nvme0n1", 00:13:03.868 "name": "xnvme_bdev" 00:13:03.868 }, 00:13:03.868 "method": "bdev_xnvme_create" 00:13:03.868 }, 00:13:03.868 { 00:13:03.868 "method": "bdev_wait_for_examine" 00:13:03.868 } 00:13:03.868 ] 00:13:03.868 } 00:13:03.868 ] 00:13:03.868 } 00:13:03.868 [2024-11-27 00:35:40.580977] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
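The xnvme_rpc pass that just finished drives everything over the JSON-RPC socket: create the bdev, read the running config back with framework_get_config bdev, check each field with the jq filters shown in the trace, then delete the bdev and kill the target. Replayed by hand against a running spdk_tgt it would look roughly like this (rpc.py standing in for the harness's rpc_cmd wrapper):

    # Create an xnvme bdev over the raw namespace using the libaio mechanism.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # Verify a stored parameter, e.g. the io_mechanism (expected: libaio).
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    # Tear the bdev down again.
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev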
00:13:03.868 [2024-11-27 00:35:40.581166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81064 ] 00:13:04.130 [2024-11-27 00:35:40.744380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.130 [2024-11-27 00:35:40.772949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.130 Running I/O for 5 seconds... 00:13:06.451 31348.00 IOPS, 122.45 MiB/s [2024-11-27T00:35:44.184Z] 32343.50 IOPS, 126.34 MiB/s [2024-11-27T00:35:45.128Z] 31365.33 IOPS, 122.52 MiB/s [2024-11-27T00:35:46.071Z] 30731.75 IOPS, 120.05 MiB/s 00:13:09.284 Latency(us) 00:13:09.284 [2024-11-27T00:35:46.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.285 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:09.285 xnvme_bdev : 5.00 30482.44 119.07 0.00 0.00 2095.06 456.86 7864.32 00:13:09.285 [2024-11-27T00:35:46.072Z] =================================================================================================================== 00:13:09.285 [2024-11-27T00:35:46.072Z] Total : 30482.44 119.07 0.00 0.00 2095.06 456.86 7864.32 00:13:09.546 00:13:09.546 real 0m11.169s 00:13:09.546 user 0m3.019s 00:13:09.546 sys 0m6.666s 00:13:09.546 00:35:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.546 00:35:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:09.546 ************************************ 00:13:09.546 END TEST xnvme_bdevperf 00:13:09.546 ************************************ 00:13:09.546 00:35:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:09.546 00:35:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.546 00:35:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.546 00:35:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.546 ************************************ 00:13:09.546 START TEST xnvme_fio_plugin 00:13:09.546 ************************************ 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.546 00:35:46 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.546 00:35:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.546 { 00:13:09.546 "subsystems": [ 00:13:09.546 { 00:13:09.546 "subsystem": "bdev", 00:13:09.546 "config": [ 00:13:09.546 { 00:13:09.546 "params": { 00:13:09.546 "io_mechanism": "libaio", 00:13:09.546 "conserve_cpu": false, 00:13:09.546 "filename": "/dev/nvme0n1", 00:13:09.546 "name": "xnvme_bdev" 00:13:09.546 }, 00:13:09.546 "method": "bdev_xnvme_create" 00:13:09.546 }, 00:13:09.546 { 00:13:09.546 "method": "bdev_wait_for_examine" 00:13:09.546 } 00:13:09.546 ] 00:13:09.546 } 00:13:09.546 ] 00:13:09.546 } 00:13:09.808 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:09.808 fio-3.35 00:13:09.808 Starting 1 thread 00:13:15.182 00:13:15.182 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81173: Wed Nov 27 00:35:51 2024 00:13:15.182 read: IOPS=30.8k, BW=120MiB/s (126MB/s)(602MiB/5007msec) 00:13:15.182 slat (usec): min=4, max=4584, avg=24.53, stdev=104.14 00:13:15.182 clat (usec): min=106, max=11895, avg=1413.78, stdev=564.26 00:13:15.182 lat (usec): min=185, max=11901, avg=1438.32, stdev=553.64 00:13:15.182 clat percentiles (usec): 00:13:15.182 | 1.00th=[ 281], 5.00th=[ 562], 10.00th=[ 734], 20.00th=[ 971], 00:13:15.182 | 30.00th=[ 1139], 40.00th=[ 1270], 50.00th=[ 1385], 60.00th=[ 1516], 00:13:15.182 | 70.00th=[ 1647], 80.00th=[ 1811], 90.00th=[ 2057], 95.00th=[ 2311], 00:13:15.182 | 99.00th=[ 3097], 99.50th=[ 3490], 99.90th=[ 4555], 99.95th=[ 5932], 00:13:15.182 | 99.99th=[ 7308] 00:13:15.182 bw ( KiB/s): min=114443, max=132952, per=100.00%, avg=123165.90, stdev=5484.54, 
samples=10 00:13:15.182 iops : min=28610, max=33238, avg=30791.40, stdev=1371.27, samples=10 00:13:15.182 lat (usec) : 250=0.67%, 500=3.14%, 750=6.75%, 1000=11.10% 00:13:15.182 lat (msec) : 2=66.46%, 4=11.66%, 10=0.20%, 20=0.01% 00:13:15.182 cpu : usr=36.74%, sys=54.45%, ctx=13, majf=0, minf=773 00:13:15.182 IO depths : 1=0.4%, 2=1.1%, 4=2.8%, 8=8.0%, 16=22.9%, 32=62.7%, >=64=2.1% 00:13:15.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.182 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:15.182 issued rwts: total=154019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:15.182 00:13:15.182 Run status group 0 (all jobs): 00:13:15.182 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=602MiB (631MB), run=5007-5007msec 00:13:15.444 ----------------------------------------------------- 00:13:15.444 Suppressions used: 00:13:15.444 count bytes template 00:13:15.444 1 11 /usr/src/fio/parse.c 00:13:15.444 1 8 libtcmalloc_minimal.so 00:13:15.444 1 904 libcrypto.so 00:13:15.444 ----------------------------------------------------- 00:13:15.444 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:15.706 00:35:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:15.706 { 00:13:15.706 "subsystems": [ 00:13:15.706 { 00:13:15.706 "subsystem": "bdev", 00:13:15.706 "config": [ 00:13:15.706 { 00:13:15.706 "params": { 00:13:15.706 "io_mechanism": "libaio", 00:13:15.706 "conserve_cpu": false, 00:13:15.706 "filename": "/dev/nvme0n1", 00:13:15.706 "name": "xnvme_bdev" 00:13:15.706 }, 00:13:15.706 "method": "bdev_xnvme_create" 00:13:15.706 }, 00:13:15.706 { 00:13:15.706 "method": "bdev_wait_for_examine" 00:13:15.706 } 00:13:15.706 ] 00:13:15.706 } 00:13:15.706 ] 00:13:15.706 } 00:13:15.706 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:15.706 fio-3.35 00:13:15.706 Starting 1 thread 00:13:22.297 00:13:22.297 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81259: Wed Nov 27 00:35:57 2024 00:13:22.297 write: IOPS=32.4k, BW=126MiB/s (133MB/s)(632MiB/5001msec); 0 zone resets 00:13:22.297 slat (usec): min=4, max=2024, avg=22.40, stdev=92.13 00:13:22.297 clat (usec): min=101, max=6183, avg=1363.11, stdev=547.80 00:13:22.297 lat (usec): min=200, max=6188, avg=1385.51, stdev=539.44 00:13:22.297 clat percentiles (usec): 00:13:22.297 | 1.00th=[ 285], 5.00th=[ 529], 10.00th=[ 693], 20.00th=[ 906], 00:13:22.297 | 30.00th=[ 1074], 40.00th=[ 1205], 50.00th=[ 1336], 60.00th=[ 1467], 00:13:22.297 | 70.00th=[ 1598], 80.00th=[ 1762], 90.00th=[ 2024], 95.00th=[ 2278], 00:13:22.297 | 99.00th=[ 2999], 99.50th=[ 3294], 99.90th=[ 3916], 99.95th=[ 4178], 00:13:22.297 | 99.99th=[ 4883] 00:13:22.297 bw ( KiB/s): min=109856, max=145152, per=98.78%, avg=127892.44, stdev=11078.63, samples=9 00:13:22.297 iops : min=27464, max=36288, avg=31973.11, stdev=2769.66, samples=9 00:13:22.297 lat (usec) : 250=0.63%, 500=3.73%, 750=7.97%, 1000=13.26% 00:13:22.297 lat (msec) : 2=63.85%, 4=10.48%, 10=0.08% 00:13:22.297 cpu : usr=39.76%, sys=50.40%, ctx=16, majf=0, minf=773 00:13:22.297 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.5%, 16=23.1%, 32=61.5%, >=64=2.1% 00:13:22.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.297 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:22.297 issued rwts: total=0,161878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.297 00:13:22.297 Run status group 0 (all jobs): 00:13:22.297 WRITE: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=632MiB (663MB), run=5001-5001msec 00:13:22.297 ----------------------------------------------------- 00:13:22.297 Suppressions used: 00:13:22.297 count bytes template 00:13:22.297 1 11 /usr/src/fio/parse.c 00:13:22.297 1 8 libtcmalloc_minimal.so 00:13:22.297 1 904 libcrypto.so 00:13:22.297 ----------------------------------------------------- 00:13:22.297 00:13:22.297 ************************************ 00:13:22.297 END TEST xnvme_fio_plugin 00:13:22.297 ************************************ 
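Both fio runs above receive their bdev table the same way: the harness renders the JSON shown in the trace and hands it to fio's spdk_bdev engine on file descriptor 62. A minimal sketch of that pattern, with the plugin path and fio flags taken from this log; the heredoc-on-fd-62 plumbing is an assumption about how gen_conf's output reaches /dev/fd/62, not the harness's actual mechanism:

# Sketch: feed a bdev config to fio's spdk_bdev engine via /dev/fd/62.
# Paths and flags match the trace above; the fd-62 heredoc is assumed.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev 62<<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON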
00:13:22.297 00:13:22.297 real 0m12.138s 00:13:22.297 user 0m5.006s 00:13:22.297 sys 0m5.826s 00:13:22.297 00:35:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.297 00:35:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.297 00:35:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:22.297 00:35:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:22.297 00:35:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:22.297 00:35:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:22.297 00:35:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.297 00:35:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.297 00:35:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.297 ************************************ 00:13:22.297 START TEST xnvme_rpc 00:13:22.297 ************************************ 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:22.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81341 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81341 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81341 ']' 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.297 00:35:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.297 [2024-11-27 00:35:58.449148] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
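The xnvme_rpc test starting here drives a plain create/inspect/delete cycle against the spdk_tgt it just launched. Condensed below using SPDK's scripts/rpc.py client (the client path is an assumption; the method names, the -c flag, and the jq filter appear verbatim in the trace):

# Sketch of the xnvme_rpc round trip; -c maps to conserve_cpu=true,
# per the cc["true"]=-c table in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path
$rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
$rpc framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# expected output: libaio
$rpc bdev_xnvme_delete xnvme_bdev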
00:13:22.297 [2024-11-27 00:35:58.449294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81341 ] 00:13:22.297 [2024-11-27 00:35:58.612957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.297 [2024-11-27 00:35:58.641713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.557 xnvme_bdev 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.557 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.818 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81341 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81341 ']' 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81341 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81341 00:13:22.819 killing process with pid 81341 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81341' 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81341 00:13:22.819 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81341 00:13:23.079 ************************************ 00:13:23.079 END TEST xnvme_rpc 00:13:23.079 ************************************ 00:13:23.079 00:13:23.079 real 0m1.489s 00:13:23.079 user 0m1.533s 00:13:23.079 sys 0m0.423s 00:13:23.079 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.079 00:35:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.348 00:35:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:23.348 00:35:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:23.348 00:35:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.348 00:35:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.348 ************************************ 00:13:23.348 START TEST xnvme_bdevperf 00:13:23.348 ************************************ 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:23.348 00:35:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:23.348 { 00:13:23.348 "subsystems": [ 00:13:23.348 { 00:13:23.348 "subsystem": "bdev", 00:13:23.348 "config": [ 00:13:23.348 { 00:13:23.348 "params": { 00:13:23.348 "io_mechanism": "libaio", 00:13:23.348 "conserve_cpu": true, 00:13:23.348 "filename": "/dev/nvme0n1", 00:13:23.348 "name": "xnvme_bdev" 00:13:23.348 }, 00:13:23.348 "method": "bdev_xnvme_create" 00:13:23.348 }, 00:13:23.348 { 00:13:23.348 "method": "bdev_wait_for_examine" 00:13:23.348 } 00:13:23.348 ] 00:13:23.348 } 00:13:23.348 ] 00:13:23.348 } 00:13:23.348 [2024-11-27 00:36:00.001472] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:13:23.348 [2024-11-27 00:36:00.001836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81393 ] 00:13:23.615 [2024-11-27 00:36:00.164784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.615 [2024-11-27 00:36:00.194640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.615 Running I/O for 5 seconds... 00:13:25.946 29205.00 IOPS, 114.08 MiB/s [2024-11-27T00:36:03.677Z] 28162.50 IOPS, 110.01 MiB/s [2024-11-27T00:36:04.622Z] 27762.00 IOPS, 108.45 MiB/s [2024-11-27T00:36:05.566Z] 27246.00 IOPS, 106.43 MiB/s 00:13:28.779 Latency(us) 00:13:28.779 [2024-11-27T00:36:05.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.779 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:28.779 xnvme_bdev : 5.00 27145.96 106.04 0.00 0.00 2352.64 261.51 6377.16 00:13:28.779 [2024-11-27T00:36:05.566Z] =================================================================================================================== 00:13:28.779 [2024-11-27T00:36:05.566Z] Total : 27145.96 106.04 0.00 0.00 2352.64 261.51 6377.16 00:13:28.779 00:36:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:28.779 00:36:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:28.779 00:36:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:28.779 00:36:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:28.779 00:36:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:28.779 { 00:13:28.779 "subsystems": [ 00:13:28.779 { 00:13:28.779 "subsystem": "bdev", 00:13:28.779 "config": [ 00:13:28.779 { 00:13:28.779 "params": { 00:13:28.779 "io_mechanism": "libaio", 00:13:28.779 "conserve_cpu": true, 00:13:28.779 "filename": "/dev/nvme0n1", 00:13:28.779 "name": "xnvme_bdev" 00:13:28.779 }, 00:13:28.779 "method": "bdev_xnvme_create" 00:13:28.779 }, 00:13:28.779 { 00:13:28.779 "method": "bdev_wait_for_examine" 00:13:28.779 } 00:13:28.779 ] 00:13:28.779 } 00:13:28.779 ] 00:13:28.779 } 00:13:29.040 [2024-11-27 00:36:05.594462] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
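bdevperf consumes the same JSON document as the fio plugin; only the consumer differs. The randread measurement above, reduced to a standalone command, with the config written to a file instead of /dev/fd/62 purely for readability:

# Sketch: replay the bdevperf randread run from this log.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
cat > /tmp/xnvme_conf.json <<'JSON'   # same document gen_conf emits above
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "libaio", "conserve_cpu": true,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
"$bdevperf" --json /tmp/xnvme_conf.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096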
00:13:29.040 [2024-11-27 00:36:05.594603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81463 ] 00:13:29.040 [2024-11-27 00:36:05.753483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.040 [2024-11-27 00:36:05.784272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.301 Running I/O for 5 seconds... 00:13:31.190 32516.00 IOPS, 127.02 MiB/s [2024-11-27T00:36:08.920Z] 30892.50 IOPS, 120.67 MiB/s [2024-11-27T00:36:10.304Z] 31466.67 IOPS, 122.92 MiB/s [2024-11-27T00:36:11.248Z] 31301.00 IOPS, 122.27 MiB/s 00:13:34.461 Latency(us) 00:13:34.461 [2024-11-27T00:36:11.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.461 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:34.461 xnvme_bdev : 5.00 31730.18 123.95 0.00 0.00 2012.24 482.07 10132.87 00:13:34.461 [2024-11-27T00:36:11.248Z] =================================================================================================================== 00:13:34.461 [2024-11-27T00:36:11.248Z] Total : 31730.18 123.95 0.00 0.00 2012.24 482.07 10132.87 00:13:34.461 ************************************ 00:13:34.461 END TEST xnvme_bdevperf 00:13:34.461 ************************************ 00:13:34.461 00:13:34.461 real 0m11.184s 00:13:34.461 user 0m3.021s 00:13:34.461 sys 0m6.629s 00:13:34.461 00:36:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.461 00:36:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:34.461 00:36:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:34.461 00:36:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:34.461 00:36:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.461 00:36:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.461 ************************************ 00:13:34.461 START TEST xnvme_fio_plugin 00:13:34.461 ************************************ 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:34.461 00:36:11 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:34.461 00:36:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.461 { 00:13:34.461 "subsystems": [ 00:13:34.461 { 00:13:34.461 "subsystem": "bdev", 00:13:34.461 "config": [ 00:13:34.461 { 00:13:34.461 "params": { 00:13:34.461 "io_mechanism": "libaio", 00:13:34.461 "conserve_cpu": true, 00:13:34.461 "filename": "/dev/nvme0n1", 00:13:34.461 "name": "xnvme_bdev" 00:13:34.461 }, 00:13:34.461 "method": "bdev_xnvme_create" 00:13:34.461 }, 00:13:34.461 { 00:13:34.461 "method": "bdev_wait_for_examine" 00:13:34.461 } 00:13:34.461 ] 00:13:34.461 } 00:13:34.461 ] 00:13:34.461 } 00:13:34.722 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:34.722 fio-3.35 00:13:34.722 Starting 1 thread 00:13:41.315 00:13:41.315 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81571: Wed Nov 27 00:36:16 2024 00:13:41.315 read: IOPS=32.7k, BW=128MiB/s (134MB/s)(639MiB/5001msec) 00:13:41.315 slat (usec): min=4, max=2056, avg=23.87, stdev=99.67 00:13:41.315 clat (usec): min=105, max=4658, avg=1317.21, stdev=515.86 00:13:41.315 lat (usec): min=216, max=4938, avg=1341.09, stdev=505.46 00:13:41.315 clat percentiles (usec): 00:13:41.315 | 1.00th=[ 273], 5.00th=[ 537], 10.00th=[ 685], 20.00th=[ 898], 00:13:41.315 | 30.00th=[ 1045], 40.00th=[ 1172], 50.00th=[ 1287], 60.00th=[ 1401], 00:13:41.315 | 70.00th=[ 1532], 80.00th=[ 1696], 90.00th=[ 1942], 95.00th=[ 2180], 00:13:41.315 | 99.00th=[ 2835], 99.50th=[ 3195], 99.90th=[ 3818], 99.95th=[ 3949], 00:13:41.315 | 99.99th=[ 4424] 00:13:41.315 bw ( KiB/s): min=120472, max=141136, per=100.00%, avg=131255.11, stdev=7269.80, 
samples=9 00:13:41.315 iops : min=30118, max=35284, avg=32813.78, stdev=1817.45, samples=9 00:13:41.315 lat (usec) : 250=0.72%, 500=3.39%, 750=8.42%, 1000=14.21% 00:13:41.315 lat (msec) : 2=64.66%, 4=8.57%, 10=0.04% 00:13:41.315 cpu : usr=34.74%, sys=56.70%, ctx=10, majf=0, minf=773 00:13:41.315 IO depths : 1=0.4%, 2=0.9%, 4=2.5%, 8=7.7%, 16=23.2%, 32=63.2%, >=64=2.1% 00:13:41.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.315 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:41.315 issued rwts: total=163549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.315 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:41.315 00:13:41.315 Run status group 0 (all jobs): 00:13:41.315 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=639MiB (670MB), run=5001-5001msec 00:13:41.315 ----------------------------------------------------- 00:13:41.315 Suppressions used: 00:13:41.315 count bytes template 00:13:41.315 1 11 /usr/src/fio/parse.c 00:13:41.315 1 8 libtcmalloc_minimal.so 00:13:41.315 1 904 libcrypto.so 00:13:41.315 ----------------------------------------------------- 00:13:41.315 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:41.315 00:36:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:41.315 { 00:13:41.315 "subsystems": [ 00:13:41.315 { 00:13:41.316 "subsystem": "bdev", 00:13:41.316 "config": [ 00:13:41.316 { 00:13:41.316 "params": { 00:13:41.316 "io_mechanism": "libaio", 00:13:41.316 "conserve_cpu": true, 00:13:41.316 "filename": "/dev/nvme0n1", 00:13:41.316 "name": "xnvme_bdev" 00:13:41.316 }, 00:13:41.316 "method": "bdev_xnvme_create" 00:13:41.316 }, 00:13:41.316 { 00:13:41.316 "method": "bdev_wait_for_examine" 00:13:41.316 } 00:13:41.316 ] 00:13:41.316 } 00:13:41.316 ] 00:13:41.316 } 00:13:41.316 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:41.316 fio-3.35 00:13:41.316 Starting 1 thread 00:13:46.608 00:13:46.608 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81656: Wed Nov 27 00:36:22 2024 00:13:46.608 write: IOPS=31.1k, BW=122MiB/s (127MB/s)(608MiB/5001msec); 0 zone resets 00:13:46.608 slat (usec): min=4, max=1973, avg=26.20, stdev=94.23 00:13:46.608 clat (usec): min=97, max=8436, avg=1337.25, stdev=583.32 00:13:46.608 lat (usec): min=184, max=8442, avg=1363.45, stdev=575.80 00:13:46.608 clat percentiles (usec): 00:13:46.608 | 1.00th=[ 262], 5.00th=[ 474], 10.00th=[ 644], 20.00th=[ 857], 00:13:46.608 | 30.00th=[ 1020], 40.00th=[ 1172], 50.00th=[ 1287], 60.00th=[ 1418], 00:13:46.608 | 70.00th=[ 1582], 80.00th=[ 1762], 90.00th=[ 2040], 95.00th=[ 2343], 00:13:46.608 | 99.00th=[ 3097], 99.50th=[ 3425], 99.90th=[ 4015], 99.95th=[ 4228], 00:13:46.608 | 99.99th=[ 6980] 00:13:46.608 bw ( KiB/s): min=117400, max=132600, per=100.00%, avg=124697.78, stdev=4563.87, samples=9 00:13:46.608 iops : min=29350, max=33150, avg=31174.44, stdev=1140.97, samples=9 00:13:46.608 lat (usec) : 100=0.01%, 250=0.84%, 500=4.81%, 750=9.00%, 1000=14.28% 00:13:46.608 lat (msec) : 2=59.90%, 4=11.07%, 10=0.10% 00:13:46.608 cpu : usr=30.84%, sys=59.12%, ctx=14, majf=0, minf=773 00:13:46.608 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.7%, 16=24.4%, 32=60.9%, >=64=2.0% 00:13:46.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.608 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:46.608 issued rwts: total=0,155605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.608 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.608 00:13:46.608 Run status group 0 (all jobs): 00:13:46.608 WRITE: bw=122MiB/s (127MB/s), 122MiB/s-122MiB/s (127MB/s-127MB/s), io=608MiB (637MB), run=5001-5001msec 00:13:46.608 ----------------------------------------------------- 00:13:46.608 Suppressions used: 00:13:46.608 count bytes template 00:13:46.608 1 11 /usr/src/fio/parse.c 00:13:46.608 1 8 libtcmalloc_minimal.so 00:13:46.608 1 904 libcrypto.so 00:13:46.608 ----------------------------------------------------- 00:13:46.608 00:13:46.608 ************************************ 00:13:46.608 END TEST xnvme_fio_plugin 00:13:46.608 ************************************ 
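Each fio_plugin stage repeats the sanitizer probe visible in the trace: inspect the plugin's dynamic dependencies, and if an ASan runtime is linked in, preload it ahead of the plugin so fio resolves the sanitizer's symbols first. Distilled from the autotest_common.sh@1343-1356 lines above (the trailing "$@" stands in for the fio arguments):

# Sketch of the LD_PRELOAD construction traced above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break   # /usr/lib64/libasan.so.8 on this host
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"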
00:13:46.608 00:13:46.608 real 0m12.169s 00:13:46.608 user 0m4.459s 00:13:46.608 sys 0m6.397s 00:13:46.608 00:36:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.608 00:36:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:46.871 00:36:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:46.871 00:36:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:46.871 00:36:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.871 00:36:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.871 ************************************ 00:13:46.871 START TEST xnvme_rpc 00:13:46.871 ************************************ 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81732 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81732 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81732 ']' 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.871 00:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.871 [2024-11-27 00:36:23.501024] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
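At this point the harness flips io_mechanism from libaio to io_uring and reruns the same three tests, so the whole section is one nested sweep. A reconstruction from the xnvme.sh@75-88 line numbers in the trace; any mechanism beyond the libaio and io_uring seen in this log is an assumption:

# Sketch of the sweep driving this section (xnvme.sh@75-88).
declare -A method_bdev_xnvme_create_0
xnvme_io=(libaio io_uring)        # further mechanisms assumed possible
xnvme_conserve_cpu=(false true)
for io in "${xnvme_io[@]}"; do
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
  method_bdev_xnvme_create_0["name"]=xnvme_bdev
  for cc in "${xnvme_conserve_cpu[@]}"; do
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    run_test xnvme_rpc xnvme_rpc
    run_test xnvme_bdevperf xnvme_bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done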
00:13:46.871 [2024-11-27 00:36:23.501368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81732 ] 00:13:47.133 [2024-11-27 00:36:23.662462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.133 [2024-11-27 00:36:23.704460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 xnvme_bdev 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.705 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.965 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81732 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81732 ']' 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81732 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81732 00:13:47.966 killing process with pid 81732 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81732' 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81732 00:13:47.966 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81732 00:13:48.227 00:13:48.227 real 0m1.431s 00:13:48.227 user 0m1.401s 00:13:48.227 sys 0m0.524s 00:13:48.227 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.227 ************************************ 00:13:48.227 END TEST xnvme_rpc 00:13:48.227 ************************************ 00:13:48.227 00:36:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.227 00:36:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:48.227 00:36:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:48.227 00:36:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.228 00:36:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 ************************************ 00:13:48.228 START TEST xnvme_bdevperf 00:13:48.228 ************************************ 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:48.228 00:36:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:48.228 { 00:13:48.228 "subsystems": [ 00:13:48.228 { 00:13:48.228 "subsystem": "bdev", 00:13:48.228 "config": [ 00:13:48.228 { 00:13:48.228 "params": { 00:13:48.228 "io_mechanism": "io_uring", 00:13:48.228 "conserve_cpu": false, 00:13:48.228 "filename": "/dev/nvme0n1", 00:13:48.228 "name": "xnvme_bdev" 00:13:48.228 }, 00:13:48.228 "method": "bdev_xnvme_create" 00:13:48.228 }, 00:13:48.228 { 00:13:48.228 "method": "bdev_wait_for_examine" 00:13:48.228 } 00:13:48.228 ] 00:13:48.228 } 00:13:48.228 ] 00:13:48.228 } 00:13:48.228 [2024-11-27 00:36:24.974678] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:13:48.228 [2024-11-27 00:36:24.974819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81790 ] 00:13:48.489 [2024-11-27 00:36:25.136627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.489 [2024-11-27 00:36:25.166194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.489 Running I/O for 5 seconds... 00:13:50.837 33922.00 IOPS, 132.51 MiB/s [2024-11-27T00:36:28.567Z] 34767.50 IOPS, 135.81 MiB/s [2024-11-27T00:36:29.555Z] 35113.00 IOPS, 137.16 MiB/s [2024-11-27T00:36:30.546Z] 35672.75 IOPS, 139.35 MiB/s 00:13:53.759 Latency(us) 00:13:53.759 [2024-11-27T00:36:30.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.759 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:53.759 xnvme_bdev : 5.00 35636.66 139.21 0.00 0.00 1792.26 379.67 8116.38 00:13:53.759 [2024-11-27T00:36:30.546Z] =================================================================================================================== 00:13:53.759 [2024-11-27T00:36:30.546Z] Total : 35636.66 139.21 0.00 0.00 1792.26 379.67 8116.38 00:13:53.759 00:36:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.759 00:36:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:53.759 00:36:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:53.759 00:36:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:53.759 00:36:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:53.759 { 00:13:53.759 "subsystems": [ 00:13:53.759 { 00:13:53.759 "subsystem": "bdev", 00:13:53.759 "config": [ 00:13:53.759 { 00:13:53.759 "params": { 00:13:53.759 "io_mechanism": "io_uring", 00:13:53.759 "conserve_cpu": false, 00:13:53.759 "filename": "/dev/nvme0n1", 00:13:53.759 "name": "xnvme_bdev" 00:13:53.759 }, 00:13:53.759 "method": "bdev_xnvme_create" 00:13:53.759 }, 00:13:53.759 { 00:13:53.759 "method": "bdev_wait_for_examine" 00:13:53.759 } 00:13:53.759 ] 00:13:53.759 } 00:13:53.759 ] 00:13:53.759 } 00:13:53.759 [2024-11-27 00:36:30.531834] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
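gen_conf itself runs under xtrace_disable, so only its output appears in this log. A plausible reconstruction, treating the helper as a plain serializer of the method_bdev_xnvme_create_0 table; the internals are assumed, though the emitted document matches the trace exactly:

# Hypothetical gen_conf: serialize the bdev method table to SPDK JSON.
gen_conf() {
  cat <<JSON
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "${method_bdev_xnvme_create_0[io_mechanism]}",
              "conserve_cpu": ${method_bdev_xnvme_create_0[conserve_cpu]},
              "filename": "${method_bdev_xnvme_create_0[filename]}",
              "name": "${method_bdev_xnvme_create_0[name]}"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
}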
00:13:53.759 [2024-11-27 00:36:30.532025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81854 ] 00:13:54.021 [2024-11-27 00:36:30.699504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.021 [2024-11-27 00:36:30.727549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.283 Running I/O for 5 seconds... 00:13:56.166 37854.00 IOPS, 147.87 MiB/s [2024-11-27T00:36:33.894Z] 38226.50 IOPS, 149.32 MiB/s [2024-11-27T00:36:35.280Z] 38354.67 IOPS, 149.82 MiB/s [2024-11-27T00:36:35.851Z] 38149.50 IOPS, 149.02 MiB/s [2024-11-27T00:36:35.851Z] 37803.20 IOPS, 147.67 MiB/s 00:13:59.064 Latency(us) 00:13:59.064 [2024-11-27T00:36:35.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.064 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:59.065 xnvme_bdev : 5.00 37785.45 147.60 0.00 0.00 1690.11 335.56 5217.67 00:13:59.065 [2024-11-27T00:36:35.852Z] =================================================================================================================== 00:13:59.065 [2024-11-27T00:36:35.852Z] Total : 37785.45 147.60 0.00 0.00 1690.11 335.56 5217.67 00:13:59.326 00:13:59.326 real 0m11.121s 00:13:59.326 user 0m4.575s 00:13:59.326 sys 0m6.285s 00:13:59.326 00:36:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.326 ************************************ 00:13:59.326 END TEST xnvme_bdevperf 00:13:59.326 ************************************ 00:13:59.326 00:36:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.326 00:36:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:59.326 00:36:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.326 00:36:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.326 00:36:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.326 ************************************ 00:13:59.326 START TEST xnvme_fio_plugin 00:13:59.326 ************************************ 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:59.326 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:59.588 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:59.588 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:59.588 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:59.588 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:59.588 00:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:59.588 { 00:13:59.588 "subsystems": [ 00:13:59.588 { 00:13:59.588 "subsystem": "bdev", 00:13:59.588 "config": [ 00:13:59.588 { 00:13:59.588 "params": { 00:13:59.588 "io_mechanism": "io_uring", 00:13:59.588 "conserve_cpu": false, 00:13:59.588 "filename": "/dev/nvme0n1", 00:13:59.588 "name": "xnvme_bdev" 00:13:59.588 }, 00:13:59.588 "method": "bdev_xnvme_create" 00:13:59.588 }, 00:13:59.588 { 00:13:59.588 "method": "bdev_wait_for_examine" 00:13:59.588 } 00:13:59.588 ] 00:13:59.588 } 00:13:59.588 ] 00:13:59.588 } 00:13:59.588 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:59.588 fio-3.35 00:13:59.588 Starting 1 thread 00:14:06.175 00:14:06.175 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81962: Wed Nov 27 00:36:41 2024 00:14:06.175 read: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5002msec) 00:14:06.175 slat (usec): min=2, max=349, avg= 3.43, stdev= 2.85 00:14:06.175 clat (usec): min=1050, max=4271, avg=1753.36, stdev=252.64 00:14:06.175 lat (usec): min=1053, max=4275, avg=1756.80, stdev=252.94 00:14:06.175 clat percentiles (usec): 00:14:06.175 | 1.00th=[ 1303], 5.00th=[ 1401], 10.00th=[ 1467], 20.00th=[ 1549], 00:14:06.175 | 30.00th=[ 1614], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1778], 00:14:06.175 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2089], 95.00th=[ 2212], 00:14:06.175 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 3163], 99.95th=[ 3720], 00:14:06.175 | 99.99th=[ 3884] 00:14:06.175 bw ( KiB/s): 
min=131328, max=140032, per=100.00%, avg=135338.67, stdev=2511.54, samples=9 00:14:06.175 iops : min=32832, max=35008, avg=33834.67, stdev=627.89, samples=9 00:14:06.175 lat (msec) : 2=85.10%, 4=14.90%, 10=0.01% 00:14:06.175 cpu : usr=30.91%, sys=67.33%, ctx=59, majf=0, minf=771 00:14:06.175 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:06.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:06.175 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:06.175 issued rwts: total=168948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:06.175 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:06.175 00:14:06.175 Run status group 0 (all jobs): 00:14:06.175 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (692MB), run=5002-5002msec 00:14:06.175 ----------------------------------------------------- 00:14:06.175 Suppressions used: 00:14:06.175 count bytes template 00:14:06.175 1 11 /usr/src/fio/parse.c 00:14:06.175 1 8 libtcmalloc_minimal.so 00:14:06.175 1 904 libcrypto.so 00:14:06.175 ----------------------------------------------------- 00:14:06.175 00:14:06.175 00:36:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:06.175 00:36:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:06.176 00:36:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:06.176 { 00:14:06.176 "subsystems": [ 00:14:06.176 { 00:14:06.176 "subsystem": "bdev", 00:14:06.176 "config": [ 00:14:06.176 { 00:14:06.176 "params": { 00:14:06.176 "io_mechanism": "io_uring", 00:14:06.176 "conserve_cpu": false, 00:14:06.176 "filename": "/dev/nvme0n1", 00:14:06.176 "name": "xnvme_bdev" 00:14:06.176 }, 00:14:06.176 "method": "bdev_xnvme_create" 00:14:06.176 }, 00:14:06.176 { 00:14:06.176 "method": "bdev_wait_for_examine" 00:14:06.176 } 00:14:06.176 ] 00:14:06.176 } 00:14:06.176 ] 00:14:06.176 } 00:14:06.176 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:06.176 fio-3.35 00:14:06.176 Starting 1 thread 00:14:11.472 00:14:11.472 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82043: Wed Nov 27 00:36:47 2024 00:14:11.472 write: IOPS=33.8k, BW=132MiB/s (139MB/s)(661MiB/5001msec); 0 zone resets 00:14:11.472 slat (usec): min=2, max=289, avg= 4.02, stdev= 2.04 00:14:11.472 clat (usec): min=246, max=8368, avg=1731.75, stdev=274.50 00:14:11.472 lat (usec): min=250, max=8382, avg=1735.77, stdev=274.80 00:14:11.472 clat percentiles (usec): 00:14:11.472 | 1.00th=[ 1270], 5.00th=[ 1385], 10.00th=[ 1434], 20.00th=[ 1516], 00:14:11.472 | 30.00th=[ 1582], 40.00th=[ 1647], 50.00th=[ 1696], 60.00th=[ 1762], 00:14:11.472 | 70.00th=[ 1827], 80.00th=[ 1909], 90.00th=[ 2057], 95.00th=[ 2212], 00:14:11.472 | 99.00th=[ 2540], 99.50th=[ 2671], 99.90th=[ 3294], 99.95th=[ 3654], 00:14:11.472 | 99.99th=[ 6521] 00:14:11.472 bw ( KiB/s): min=130912, max=142128, per=100.00%, avg=135536.89, stdev=3963.10, samples=9 00:14:11.472 iops : min=32728, max=35532, avg=33884.22, stdev=990.77, samples=9 00:14:11.472 lat (usec) : 250=0.01%, 500=0.01%, 750=0.04%, 1000=0.04% 00:14:11.472 lat (msec) : 2=86.61%, 4=13.26%, 10=0.03% 00:14:11.472 cpu : usr=32.66%, sys=66.08%, ctx=12, majf=0, minf=771 00:14:11.472 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:14:11.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.472 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:11.472 issued rwts: total=0,169266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.472 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.472 00:14:11.472 Run status group 0 (all jobs): 00:14:11.472 WRITE: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=661MiB (693MB), run=5001-5001msec 00:14:11.472 ----------------------------------------------------- 00:14:11.472 Suppressions used: 00:14:11.472 count bytes template 00:14:11.472 1 11 /usr/src/fio/parse.c 00:14:11.472 1 8 libtcmalloc_minimal.so 00:14:11.472 1 904 libcrypto.so 00:14:11.472 ----------------------------------------------------- 00:14:11.472 00:14:11.472 00:14:11.472 real 0m12.108s 00:14:11.472 user 0m4.354s 00:14:11.472 sys 0m7.281s 00:14:11.472 00:36:48 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.472 00:36:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:11.472 ************************************ 00:14:11.472 END TEST xnvme_fio_plugin 00:14:11.472 ************************************ 00:14:11.733 00:36:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:11.733 00:36:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:11.733 00:36:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:11.733 00:36:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:11.733 00:36:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.733 00:36:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.733 00:36:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.733 ************************************ 00:14:11.733 START TEST xnvme_rpc 00:14:11.733 ************************************ 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82118 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82118 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82118 ']' 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.733 00:36:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.734 00:36:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.734 00:36:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.734 00:36:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.734 [2024-11-27 00:36:48.364383] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
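The fio passes above drive fio through SPDK's external bdev ioengine: the harness preloads build/fio/spdk_bdev (together with the ASan runtime it resolved via ldd), hands the bdev layout to fio as JSON on /dev/fd/62, and points --filename at the xnvme bdev by name. A minimal standalone sketch of that invocation, assuming the SPDK checkout path, the libasan path, and the /dev/nvme0n1 target seen in this log:

# Hypothetical standalone reproduction of the fio_bdev run above; SPDK_DIR,
# the libasan path, and the device path are assumptions copied from the log.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# The harness feeds this JSON via process substitution (/dev/fd/62); a
# temp file behaves the same.
cat > /tmp/xnvme_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# fio dlopen()s the plugin, so it is preloaded together with libasan when
# the build is sanitized (the harness finds the path via ldd | grep libasan).
LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randwrite --time_based --runtime=5 --thread=1 --name=xnvme_bdev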
00:14:11.734 [2024-11-27 00:36:48.364530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82118 ] 00:14:11.995 [2024-11-27 00:36:48.525678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.995 [2024-11-27 00:36:48.555962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.567 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.567 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:12.567 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:12.567 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.568 xnvme_bdev 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:12.568 00:36:49 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.568 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82118 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82118 ']' 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82118 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82118 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.829 killing process with pid 82118 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82118' 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82118 00:14:12.829 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82118 00:14:13.090 00:14:13.090 real 0m1.479s 00:14:13.090 user 0m1.553s 00:14:13.090 sys 0m0.439s 00:14:13.090 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.090 00:36:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.090 ************************************ 00:14:13.090 END TEST xnvme_rpc 00:14:13.090 ************************************ 00:14:13.090 00:36:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:13.090 00:36:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:13.090 00:36:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.090 00:36:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.090 ************************************ 00:14:13.090 START TEST xnvme_bdevperf 00:14:13.090 ************************************ 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.090 00:36:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.090 { 00:14:13.090 "subsystems": [ 00:14:13.090 { 00:14:13.090 "subsystem": "bdev", 00:14:13.090 "config": [ 00:14:13.090 { 00:14:13.090 "params": { 00:14:13.090 "io_mechanism": "io_uring", 00:14:13.090 "conserve_cpu": true, 00:14:13.090 "filename": "/dev/nvme0n1", 00:14:13.090 "name": "xnvme_bdev" 00:14:13.090 }, 00:14:13.090 "method": "bdev_xnvme_create" 00:14:13.090 }, 00:14:13.090 { 00:14:13.090 "method": "bdev_wait_for_examine" 00:14:13.090 } 00:14:13.090 ] 00:14:13.090 } 00:14:13.090 ] 00:14:13.090 } 00:14:13.351 [2024-11-27 00:36:49.910660] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:14:13.351 [2024-11-27 00:36:49.910801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82181 ] 00:14:13.351 [2024-11-27 00:36:50.075847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.351 [2024-11-27 00:36:50.106236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.614 Running I/O for 5 seconds... 00:14:15.497 36147.00 IOPS, 141.20 MiB/s [2024-11-27T00:36:53.226Z] 36130.50 IOPS, 141.13 MiB/s [2024-11-27T00:36:54.614Z] 36530.00 IOPS, 142.70 MiB/s [2024-11-27T00:36:55.560Z] 36113.25 IOPS, 141.07 MiB/s [2024-11-27T00:36:55.560Z] 36162.20 IOPS, 141.26 MiB/s 00:14:18.773 Latency(us) 00:14:18.773 [2024-11-27T00:36:55.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.773 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:18.773 xnvme_bdev : 5.00 36159.98 141.25 0.00 0.00 1766.25 806.60 4814.38 00:14:18.773 [2024-11-27T00:36:55.560Z] =================================================================================================================== 00:14:18.773 [2024-11-27T00:36:55.560Z] Total : 36159.98 141.25 0.00 0.00 1766.25 806.60 4814.38 00:14:18.773 00:36:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.773 00:36:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:18.773 00:36:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:18.773 00:36:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:18.773 00:36:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.773 { 00:14:18.773 "subsystems": [ 00:14:18.773 { 00:14:18.773 "subsystem": "bdev", 00:14:18.773 "config": [ 00:14:18.773 { 00:14:18.773 "params": { 00:14:18.773 "io_mechanism": "io_uring", 00:14:18.773 "conserve_cpu": true, 00:14:18.773 "filename": "/dev/nvme0n1", 00:14:18.773 "name": "xnvme_bdev" 00:14:18.773 }, 00:14:18.773 "method": "bdev_xnvme_create" 00:14:18.773 }, 00:14:18.773 { 00:14:18.773 "method": "bdev_wait_for_examine" 00:14:18.773 } 00:14:18.773 ] 00:14:18.773 } 00:14:18.773 ] 00:14:18.773 } 00:14:18.773 [2024-11-27 00:36:55.480158] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
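The bdevperf passes take the same JSON bdev config on --json and run one workload per invocation: -w picks the pattern, -q the queue depth, -o the I/O size, -t the runtime, and -T the bdev to target. A sketch reusing the config file from the fio sketch above; the jq rewrite is an editorial convenience for flipping conserve_cpu, not something the harness invokes:

# Flip conserve_cpu to true, as in the run above (the jq path assumes the
# bdev_xnvme_create entry is first in the bdev config array).
jq '.subsystems[0].config[0].params.conserve_cpu = true' \
  /tmp/xnvme_bdev.json > /tmp/xnvme_bdev_cc.json

$SPDK_DIR/build/examples/bdevperf --json /tmp/xnvme_bdev_cc.json \
  -q 64 -w randread -t 5 -T xnvme_bdev -o 4096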
00:14:18.773 [2024-11-27 00:36:55.480317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82245 ] 00:14:19.034 [2024-11-27 00:36:55.641874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.034 [2024-11-27 00:36:55.672792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.034 Running I/O for 5 seconds... 00:14:21.367 35553.00 IOPS, 138.88 MiB/s [2024-11-27T00:36:59.097Z] 35148.00 IOPS, 137.30 MiB/s [2024-11-27T00:37:00.042Z] 35352.00 IOPS, 138.09 MiB/s [2024-11-27T00:37:00.988Z] 35369.50 IOPS, 138.16 MiB/s [2024-11-27T00:37:00.988Z] 35420.00 IOPS, 138.36 MiB/s 00:14:24.201 Latency(us) 00:14:24.201 [2024-11-27T00:37:00.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.201 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:24.201 xnvme_bdev : 5.00 35401.90 138.29 0.00 0.00 1803.86 346.58 10233.70 00:14:24.201 [2024-11-27T00:37:00.988Z] =================================================================================================================== 00:14:24.201 [2024-11-27T00:37:00.988Z] Total : 35401.90 138.29 0.00 0.00 1803.86 346.58 10233.70 00:14:24.463 00:14:24.463 real 0m11.223s 00:14:24.463 user 0m7.212s 00:14:24.463 sys 0m3.487s 00:14:24.463 00:37:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.463 ************************************ 00:14:24.463 END TEST xnvme_bdevperf 00:14:24.463 ************************************ 00:14:24.463 00:37:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:24.463 00:37:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:24.463 00:37:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:24.463 00:37:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.463 00:37:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.463 ************************************ 00:14:24.463 START TEST xnvme_fio_plugin 00:14:24.463 ************************************ 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:24.463 00:37:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:24.463 { 00:14:24.463 "subsystems": [ 00:14:24.463 { 00:14:24.463 "subsystem": "bdev", 00:14:24.463 "config": [ 00:14:24.463 { 00:14:24.463 "params": { 00:14:24.463 "io_mechanism": "io_uring", 00:14:24.463 "conserve_cpu": true, 00:14:24.463 "filename": "/dev/nvme0n1", 00:14:24.463 "name": "xnvme_bdev" 00:14:24.463 }, 00:14:24.463 "method": "bdev_xnvme_create" 00:14:24.463 }, 00:14:24.463 { 00:14:24.463 "method": "bdev_wait_for_examine" 00:14:24.463 } 00:14:24.463 ] 00:14:24.463 } 00:14:24.463 ] 00:14:24.463 } 00:14:24.725 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:24.725 fio-3.35 00:14:24.725 Starting 1 thread 00:14:30.105 00:14:30.105 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82354: Wed Nov 27 00:37:06 2024 00:14:30.105 read: IOPS=34.9k, BW=136MiB/s (143MB/s)(681MiB/5001msec) 00:14:30.105 slat (nsec): min=2783, max=96405, avg=3410.98, stdev=1683.96 00:14:30.105 clat (usec): min=1067, max=8497, avg=1696.37, stdev=246.78 00:14:30.105 lat (usec): min=1070, max=8500, avg=1699.78, stdev=247.15 00:14:30.105 clat percentiles (usec): 00:14:30.105 | 1.00th=[ 1221], 5.00th=[ 1352], 10.00th=[ 1418], 20.00th=[ 1500], 00:14:30.105 | 30.00th=[ 1565], 40.00th=[ 1614], 50.00th=[ 1663], 60.00th=[ 1729], 00:14:30.105 | 70.00th=[ 1795], 80.00th=[ 1876], 90.00th=[ 2008], 95.00th=[ 2147], 00:14:30.105 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2933], 99.95th=[ 3294], 00:14:30.105 | 99.99th=[ 3687] 00:14:30.105 bw 
( KiB/s): min=129277, max=152576, per=99.83%, avg=139292.11, stdev=6467.36, samples=9 00:14:30.105 iops : min=32319, max=38144, avg=34823.00, stdev=1616.89, samples=9 00:14:30.105 lat (msec) : 2=89.48%, 4=10.51%, 10=0.01% 00:14:30.105 cpu : usr=58.32%, sys=38.02%, ctx=14, majf=0, minf=771 00:14:30.105 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:30.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.105 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:30.105 issued rwts: total=174442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.105 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:30.105 00:14:30.105 Run status group 0 (all jobs): 00:14:30.105 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=681MiB (715MB), run=5001-5001msec 00:14:30.678 ----------------------------------------------------- 00:14:30.678 Suppressions used: 00:14:30.678 count bytes template 00:14:30.678 1 11 /usr/src/fio/parse.c 00:14:30.678 1 8 libtcmalloc_minimal.so 00:14:30.678 1 904 libcrypto.so 00:14:30.678 ----------------------------------------------------- 00:14:30.678 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:30.678 00:37:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.678 { 00:14:30.678 "subsystems": [ 00:14:30.678 { 00:14:30.678 "subsystem": "bdev", 00:14:30.678 "config": [ 00:14:30.678 { 00:14:30.678 "params": { 00:14:30.678 "io_mechanism": "io_uring", 00:14:30.678 "conserve_cpu": true, 00:14:30.678 "filename": "/dev/nvme0n1", 00:14:30.678 "name": "xnvme_bdev" 00:14:30.678 }, 00:14:30.678 "method": "bdev_xnvme_create" 00:14:30.678 }, 00:14:30.678 { 00:14:30.678 "method": "bdev_wait_for_examine" 00:14:30.678 } 00:14:30.678 ] 00:14:30.678 } 00:14:30.678 ] 00:14:30.678 } 00:14:30.678 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:30.678 fio-3.35 00:14:30.678 Starting 1 thread 00:14:37.280 00:14:37.280 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82435: Wed Nov 27 00:37:12 2024 00:14:37.280 write: IOPS=35.1k, BW=137MiB/s (144MB/s)(685MiB/5000msec); 0 zone resets 00:14:37.280 slat (nsec): min=2840, max=67433, avg=3719.50, stdev=1771.57 00:14:37.280 clat (usec): min=931, max=6188, avg=1675.26, stdev=253.03 00:14:37.280 lat (usec): min=934, max=6192, avg=1678.98, stdev=253.39 00:14:37.280 clat percentiles (usec): 00:14:37.280 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 1385], 20.00th=[ 1467], 00:14:37.280 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1647], 60.00th=[ 1713], 00:14:37.280 | 70.00th=[ 1778], 80.00th=[ 1860], 90.00th=[ 1991], 95.00th=[ 2147], 00:14:37.280 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2835], 99.95th=[ 3261], 00:14:37.280 | 99.99th=[ 3851] 00:14:37.280 bw ( KiB/s): min=133080, max=152032, per=99.65%, avg=139873.78, stdev=6334.44, samples=9 00:14:37.280 iops : min=33270, max=38008, avg=34968.44, stdev=1583.61, samples=9 00:14:37.280 lat (usec) : 1000=0.01% 00:14:37.280 lat (msec) : 2=90.29%, 4=9.69%, 10=0.01% 00:14:37.280 cpu : usr=62.52%, sys=33.78%, ctx=11, majf=0, minf=771 00:14:37.280 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:37.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.280 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:37.280 issued rwts: total=0,175458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.280 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.280 00:14:37.280 Run status group 0 (all jobs): 00:14:37.280 WRITE: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=685MiB (719MB), run=5000-5000msec 00:14:37.280 ----------------------------------------------------- 00:14:37.280 Suppressions used: 00:14:37.280 count bytes template 00:14:37.281 1 11 /usr/src/fio/parse.c 00:14:37.281 1 8 libtcmalloc_minimal.so 00:14:37.281 1 904 libcrypto.so 00:14:37.281 ----------------------------------------------------- 00:14:37.281 00:14:37.281 00:14:37.281 real 0m12.088s 00:14:37.281 user 0m7.219s 00:14:37.281 sys 0m4.187s 00:14:37.281 00:37:13 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.281 ************************************ 00:14:37.281 END TEST xnvme_fio_plugin 00:14:37.281 ************************************ 00:14:37.281 00:37:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:37.281 00:37:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:37.281 00:37:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:37.281 00:37:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.281 00:37:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.281 ************************************ 00:14:37.281 START TEST xnvme_rpc 00:14:37.281 ************************************ 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82521 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82521 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82521 ']' 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.281 00:37:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:37.281 [2024-11-27 00:37:13.376050] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
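The rpc_xnvme checks in these xnvme_rpc tests all reduce to one pattern: dump the bdev subsystem's saved config from the running target and select fields out of the bdev_xnvme_create entry. A sketch with rpc.py (the scripts/rpc.py path is an assumption; the harness's rpc_cmd helper wraps the same RPC):

RPC=$SPDK_DIR/scripts/rpc.py
$RPC framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
# Prints /dev/ng0n1 on this run; name, io_mechanism and conserve_cpu are
# read with the same jq select on their respective params fields.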
00:14:37.281 [2024-11-27 00:37:13.376190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82521 ] 00:14:37.281 [2024-11-27 00:37:13.535839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.281 [2024-11-27 00:37:13.565254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 xnvme_bdev 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.542 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.802 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:37.802 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82521 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82521 ']' 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82521 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82521 00:14:37.803 killing process with pid 82521 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82521' 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82521 00:14:37.803 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82521 00:14:38.064 00:14:38.064 real 0m1.403s 00:14:38.064 user 0m1.458s 00:14:38.064 sys 0m0.427s 00:14:38.064 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.064 00:37:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.064 ************************************ 00:14:38.064 END TEST xnvme_rpc 00:14:38.064 ************************************ 00:14:38.064 00:37:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:38.064 00:37:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:38.064 00:37:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.064 00:37:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:38.064 ************************************ 00:14:38.064 START TEST xnvme_bdevperf 00:14:38.064 ************************************ 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.064 00:37:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.064 { 00:14:38.064 "subsystems": [ 00:14:38.064 { 00:14:38.064 "subsystem": "bdev", 00:14:38.064 "config": [ 00:14:38.064 { 00:14:38.064 "params": { 00:14:38.064 "io_mechanism": "io_uring_cmd", 00:14:38.064 "conserve_cpu": false, 00:14:38.064 "filename": "/dev/ng0n1", 00:14:38.064 "name": "xnvme_bdev" 00:14:38.064 }, 00:14:38.064 "method": "bdev_xnvme_create" 00:14:38.064 }, 00:14:38.064 { 00:14:38.064 "method": "bdev_wait_for_examine" 00:14:38.064 } 00:14:38.064 ] 00:14:38.064 } 00:14:38.064 ] 00:14:38.064 } 00:14:38.064 [2024-11-27 00:37:14.820152] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:14:38.064 [2024-11-27 00:37:14.820838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82577 ] 00:14:38.326 [2024-11-27 00:37:14.984599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.326 [2024-11-27 00:37:15.013639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.326 Running I/O for 5 seconds... 00:14:40.654 34643.00 IOPS, 135.32 MiB/s [2024-11-27T00:37:18.383Z] 34129.00 IOPS, 133.32 MiB/s [2024-11-27T00:37:19.327Z] 35213.00 IOPS, 137.55 MiB/s [2024-11-27T00:37:20.270Z] 35101.00 IOPS, 137.11 MiB/s [2024-11-27T00:37:20.270Z] 35681.40 IOPS, 139.38 MiB/s 00:14:43.483 Latency(us) 00:14:43.483 [2024-11-27T00:37:20.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.483 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:43.483 xnvme_bdev : 5.00 35677.65 139.37 0.00 0.00 1790.12 642.76 9326.28 00:14:43.483 [2024-11-27T00:37:20.270Z] =================================================================================================================== 00:14:43.483 [2024-11-27T00:37:20.270Z] Total : 35677.65 139.37 0.00 0.00 1790.12 642.76 9326.28 00:14:43.745 00:37:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.745 00:37:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:43.745 00:37:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.745 00:37:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.745 00:37:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.745 { 00:14:43.745 "subsystems": [ 00:14:43.745 { 00:14:43.745 "subsystem": "bdev", 00:14:43.745 "config": [ 00:14:43.745 { 00:14:43.745 "params": { 00:14:43.745 "io_mechanism": "io_uring_cmd", 00:14:43.745 "conserve_cpu": false, 00:14:43.745 "filename": "/dev/ng0n1", 00:14:43.745 "name": "xnvme_bdev" 00:14:43.745 }, 00:14:43.745 "method": "bdev_xnvme_create" 00:14:43.745 }, 00:14:43.745 { 00:14:43.745 "method": "bdev_wait_for_examine" 00:14:43.745 } 00:14:43.745 ] 00:14:43.745 } 00:14:43.745 ] 00:14:43.745 } 00:14:43.745 [2024-11-27 00:37:20.364363] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
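This io_uring_cmd leg targets /dev/ng0n1, the NVMe generic character device, so xnvme submits NVMe passthrough commands over io_uring instead of block I/O against /dev/nvme0n1. The create/delete pair the surrounding tests issue, as a standalone sketch (same hypothetical $RPC wrapper as above):

# Create the bdev (no -c, so conserve_cpu stays false) and tear it down.
$RPC bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
$RPC bdev_xnvme_delete xnvme_bdev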
00:14:43.745 [2024-11-27 00:37:20.364510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82640 ] 00:14:43.745 [2024-11-27 00:37:20.528295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.007 [2024-11-27 00:37:20.557356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.007 Running I/O for 5 seconds... 00:14:45.894 39161.00 IOPS, 152.97 MiB/s [2024-11-27T00:37:24.067Z] 37485.50 IOPS, 146.43 MiB/s [2024-11-27T00:37:25.009Z] 36896.67 IOPS, 144.13 MiB/s [2024-11-27T00:37:25.950Z] 37278.75 IOPS, 145.62 MiB/s [2024-11-27T00:37:25.950Z] 38081.60 IOPS, 148.76 MiB/s 00:14:49.163 Latency(us) 00:14:49.163 [2024-11-27T00:37:25.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.163 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:49.163 xnvme_bdev : 5.01 38044.10 148.61 0.00 0.00 1678.53 335.56 6553.60 00:14:49.163 [2024-11-27T00:37:25.950Z] =================================================================================================================== 00:14:49.163 [2024-11-27T00:37:25.950Z] Total : 38044.10 148.61 0.00 0.00 1678.53 335.56 6553.60 00:14:49.163 00:37:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.163 00:37:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:49.163 00:37:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.163 00:37:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.163 00:37:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.163 { 00:14:49.163 "subsystems": [ 00:14:49.163 { 00:14:49.163 "subsystem": "bdev", 00:14:49.163 "config": [ 00:14:49.163 { 00:14:49.163 "params": { 00:14:49.163 "io_mechanism": "io_uring_cmd", 00:14:49.163 "conserve_cpu": false, 00:14:49.163 "filename": "/dev/ng0n1", 00:14:49.163 "name": "xnvme_bdev" 00:14:49.163 }, 00:14:49.163 "method": "bdev_xnvme_create" 00:14:49.163 }, 00:14:49.163 { 00:14:49.163 "method": "bdev_wait_for_examine" 00:14:49.163 } 00:14:49.163 ] 00:14:49.163 } 00:14:49.163 ] 00:14:49.163 } 00:14:49.163 [2024-11-27 00:37:25.924119] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:14:49.163 [2024-11-27 00:37:25.924267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82711 ] 00:14:49.424 [2024-11-27 00:37:26.088160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.424 [2024-11-27 00:37:26.117743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.686 Running I/O for 5 seconds... 
00:14:51.564 73536.00 IOPS, 287.25 MiB/s [2024-11-27T00:37:29.287Z] 72896.00 IOPS, 284.75 MiB/s [2024-11-27T00:37:30.661Z] 74688.00 IOPS, 291.75 MiB/s [2024-11-27T00:37:31.227Z] 77600.00 IOPS, 303.12 MiB/s [2024-11-27T00:37:31.488Z] 80780.80 IOPS, 315.55 MiB/s 00:14:54.701 Latency(us) 00:14:54.701 [2024-11-27T00:37:31.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.701 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:54.701 xnvme_bdev : 5.00 80763.86 315.48 0.00 0.00 789.15 472.62 3478.45 00:14:54.701 [2024-11-27T00:37:31.488Z] =================================================================================================================== 00:14:54.701 [2024-11-27T00:37:31.488Z] Total : 80763.86 315.48 0.00 0.00 789.15 472.62 3478.45 00:14:54.701 00:37:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.701 00:37:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:54.701 00:37:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:54.701 00:37:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:54.701 00:37:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.701 { 00:14:54.701 "subsystems": [ 00:14:54.701 { 00:14:54.701 "subsystem": "bdev", 00:14:54.701 "config": [ 00:14:54.701 { 00:14:54.701 "params": { 00:14:54.701 "io_mechanism": "io_uring_cmd", 00:14:54.701 "conserve_cpu": false, 00:14:54.701 "filename": "/dev/ng0n1", 00:14:54.701 "name": "xnvme_bdev" 00:14:54.701 }, 00:14:54.701 "method": "bdev_xnvme_create" 00:14:54.701 }, 00:14:54.701 { 00:14:54.701 "method": "bdev_wait_for_examine" 00:14:54.701 } 00:14:54.701 ] 00:14:54.701 } 00:14:54.701 ] 00:14:54.701 } 00:14:54.701 [2024-11-27 00:37:31.436384] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:14:54.701 [2024-11-27 00:37:31.436493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82774 ] 00:14:54.960 [2024-11-27 00:37:31.590843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.960 [2024-11-27 00:37:31.617608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.960 Running I/O for 5 seconds... 
00:14:57.283 37904.00 IOPS, 148.06 MiB/s [2024-11-27T00:37:35.016Z] 23626.50 IOPS, 92.29 MiB/s [2024-11-27T00:37:35.985Z] 15804.33 IOPS, 61.74 MiB/s [2024-11-27T00:37:36.932Z] 11899.25 IOPS, 46.48 MiB/s [2024-11-27T00:37:37.193Z] 9565.20 IOPS, 37.36 MiB/s 00:15:00.406 Latency(us) 00:15:00.406 [2024-11-27T00:37:37.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.406 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:00.406 xnvme_bdev : 5.27 9089.02 35.50 0.00 0.00 6876.33 61.83 709805.29 00:15:00.406 [2024-11-27T00:37:37.193Z] =================================================================================================================== 00:15:00.406 [2024-11-27T00:37:37.193Z] Total : 9089.02 35.50 0.00 0.00 6876.33 61.83 709805.29 00:15:00.406 00:15:00.406 real 0m22.411s 00:15:00.406 user 0m11.532s 00:15:00.406 sys 0m10.414s 00:15:00.406 ************************************ 00:15:00.406 END TEST xnvme_bdevperf 00:15:00.406 ************************************ 00:15:00.406 00:37:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.406 00:37:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:00.668 00:37:37 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:00.668 00:37:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:00.668 00:37:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.668 00:37:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.668 ************************************ 00:15:00.668 START TEST xnvme_fio_plugin 00:15:00.668 ************************************ 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # 
gen_conf 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:00.668 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:00.669 00:37:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:00.669 { 00:15:00.669 "subsystems": [ 00:15:00.669 { 00:15:00.669 "subsystem": "bdev", 00:15:00.669 "config": [ 00:15:00.669 { 00:15:00.669 "params": { 00:15:00.669 "io_mechanism": "io_uring_cmd", 00:15:00.669 "conserve_cpu": false, 00:15:00.669 "filename": "/dev/ng0n1", 00:15:00.669 "name": "xnvme_bdev" 00:15:00.669 }, 00:15:00.669 "method": "bdev_xnvme_create" 00:15:00.669 }, 00:15:00.669 { 00:15:00.669 "method": "bdev_wait_for_examine" 00:15:00.669 } 00:15:00.669 ] 00:15:00.669 } 00:15:00.669 ] 00:15:00.669 } 00:15:00.669 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:00.669 fio-3.35 00:15:00.669 Starting 1 thread 00:15:07.268 00:15:07.268 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82876: Wed Nov 27 00:37:42 2024 00:15:07.268 read: IOPS=43.0k, BW=168MiB/s (176MB/s)(841MiB/5002msec) 00:15:07.268 slat (nsec): min=2806, max=99714, avg=3021.37, stdev=1111.54 00:15:07.268 clat (usec): min=839, max=12265, avg=1368.13, stdev=284.94 00:15:07.268 lat (usec): min=842, max=12268, avg=1371.15, stdev=285.05 00:15:07.268 clat percentiles (usec): 00:15:07.268 | 1.00th=[ 963], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1156], 00:15:07.268 | 30.00th=[ 1205], 40.00th=[ 1237], 50.00th=[ 1287], 60.00th=[ 1352], 00:15:07.268 | 70.00th=[ 1434], 80.00th=[ 1565], 90.00th=[ 1762], 95.00th=[ 1926], 00:15:07.268 | 99.00th=[ 2245], 99.50th=[ 2409], 99.90th=[ 2835], 99.95th=[ 3097], 00:15:07.268 | 99.99th=[ 3556] 00:15:07.268 bw ( KiB/s): min=158208, max=180736, per=100.00%, avg=172665.78, stdev=6449.12, samples=9 00:15:07.268 iops : min=39552, max=45184, avg=43166.44, stdev=1612.28, samples=9 00:15:07.268 lat (usec) : 1000=2.17% 00:15:07.268 lat (msec) : 2=94.25%, 4=3.58%, 10=0.01%, 20=0.01% 00:15:07.268 cpu : usr=39.89%, sys=59.17%, ctx=12, majf=0, minf=771 00:15:07.268 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:15:07.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.268 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:15:07.268 issued rwts: total=215310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.269 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:07.269 00:15:07.269 Run status group 0 (all jobs): 00:15:07.269 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=841MiB (882MB), run=5002-5002msec 00:15:07.269 ----------------------------------------------------- 00:15:07.269 Suppressions used: 00:15:07.269 count bytes template 00:15:07.269 1 11 /usr/src/fio/parse.c 00:15:07.269 1 8 libtcmalloc_minimal.so 00:15:07.269 1 904 libcrypto.so 00:15:07.269 ----------------------------------------------------- 00:15:07.269 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:07.269 00:37:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.269 { 00:15:07.269 "subsystems": [ 00:15:07.269 { 00:15:07.269 "subsystem": "bdev", 00:15:07.269 "config": [ 00:15:07.269 { 00:15:07.269 "params": { 00:15:07.269 "io_mechanism": "io_uring_cmd", 00:15:07.269 "conserve_cpu": false, 00:15:07.269 "filename": "/dev/ng0n1", 00:15:07.269 "name": "xnvme_bdev" 00:15:07.269 }, 00:15:07.269 "method": "bdev_xnvme_create" 00:15:07.269 }, 00:15:07.269 { 00:15:07.269 "method": "bdev_wait_for_examine" 00:15:07.269 } 00:15:07.269 ] 00:15:07.269 } 00:15:07.269 ] 00:15:07.269 } 00:15:07.269 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:07.269 fio-3.35 00:15:07.269 Starting 1 thread 00:15:12.575 00:15:12.575 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82961: Wed Nov 27 00:37:48 2024 00:15:12.575 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec); 0 zone resets 00:15:12.575 slat (usec): min=2, max=272, avg= 3.88, stdev= 2.41 00:15:12.575 clat (usec): min=67, max=20977, avg=1799.54, stdev=2446.26 00:15:12.575 lat (usec): min=71, max=20981, avg=1803.42, stdev=2446.42 00:15:12.575 clat percentiles (usec): 00:15:12.575 | 1.00th=[ 334], 5.00th=[ 725], 10.00th=[ 971], 20.00th=[ 1090], 00:15:12.575 | 30.00th=[ 1156], 40.00th=[ 1205], 50.00th=[ 1270], 60.00th=[ 1336], 00:15:12.575 | 70.00th=[ 1434], 80.00th=[ 1582], 90.00th=[ 1827], 95.00th=[ 2474], 00:15:12.575 | 99.00th=[14353], 99.50th=[15401], 99.90th=[17171], 99.95th=[17695], 00:15:12.575 | 99.99th=[18744] 00:15:12.575 bw ( KiB/s): min=49136, max=180368, per=96.88%, avg=128739.56, stdev=60510.56, samples=9 00:15:12.575 iops : min=12284, max=45092, avg=32184.89, stdev=15127.64, samples=9 00:15:12.575 lat (usec) : 100=0.01%, 250=0.52%, 500=1.91%, 750=2.99%, 1000=6.27% 00:15:12.575 lat (msec) : 2=81.51%, 4=2.24%, 10=0.50%, 20=4.05%, 50=0.01% 00:15:12.575 cpu : usr=38.98%, sys=59.48%, ctx=69, majf=0, minf=771 00:15:12.575 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.1%, 16=20.7%, 32=56.9%, >=64=3.7% 00:15:12.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.575 complete : 0=0.0%, 4=97.7%, 8=0.4%, 16=0.3%, 32=0.3%, 64=1.3%, >=64=0.0% 00:15:12.575 issued rwts: total=0,166140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.575 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:12.575 00:15:12.575 Run status group 0 (all jobs): 00:15:12.575 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (681MB), run=5001-5001msec 00:15:12.575 ----------------------------------------------------- 00:15:12.575 Suppressions used: 00:15:12.575 count bytes template 00:15:12.575 1 11 /usr/src/fio/parse.c 00:15:12.575 1 8 libtcmalloc_minimal.so 00:15:12.575 1 904 libcrypto.so 00:15:12.575 ----------------------------------------------------- 00:15:12.575 00:15:12.575 ************************************ 00:15:12.575 END TEST xnvme_fio_plugin 00:15:12.575 ************************************ 00:15:12.575 00:15:12.575 real 0m12.070s 00:15:12.575 user 0m5.128s 00:15:12.575 sys 0m6.508s 00:15:12.575 00:37:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.575 00:37:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:12.841 00:37:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:12.841 00:37:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:12.841 
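The LD_PRELOAD dance traced through the fio runs above is the standard trick for driving an ASan-instrumented ioengine from an uninstrumented fio binary: the sanitizer runtime must be the first object the dynamic loader maps, so the harness discovers it from the plugin's own link dependencies. Stripped of the wrapper plumbing, the detection reduces to roughly the sketch below (plugin path, library path, and fio flags copied from this log; the fallback loop over libclang_rt.asan is elided):

# find the ASan runtime the plugin was linked against, then preload it ahead of the plugin
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # e.g. /usr/lib64/libasan.so.8
if [[ -n $asan_lib ]]; then
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k \
    --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
    --thread=1 --name xnvme_bdev
fi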
00:37:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:12.841 00:37:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:12.842 00:37:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:12.842 00:37:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.842 00:37:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.842 ************************************ 00:15:12.842 START TEST xnvme_rpc 00:15:12.842 ************************************ 00:15:12.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83041 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83041 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83041 ']' 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.842 00:37:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.842 [2024-11-27 00:37:49.471274] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
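The xnvme_rpc test starting here exercises create/inspect/delete of the bdev over JSON-RPC; rpc_cmd in the trace is a thin wrapper that forwards its arguments to scripts/rpc.py. A hand-run replay against a running spdk_tgt would look roughly like this (argument order and jq filter copied from the trace below; default /var/tmp/spdk.sock socket assumed):

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c   # -c => conserve_cpu=true
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev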
00:15:12.842 [2024-11-27 00:37:49.471945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83041 ] 00:15:13.103 [2024-11-27 00:37:49.637346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.103 [2024-11-27 00:37:49.666258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.676 xnvme_bdev 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:13.676 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83041 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83041 ']' 00:15:13.938 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83041 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83041 00:15:13.939 killing process with pid 83041 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83041' 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83041 00:15:13.939 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83041 00:15:14.200 00:15:14.200 real 0m1.437s 00:15:14.200 user 0m1.468s 00:15:14.200 sys 0m0.463s 00:15:14.200 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.200 00:37:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.200 ************************************ 00:15:14.200 END TEST xnvme_rpc 00:15:14.200 ************************************ 00:15:14.200 00:37:50 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:14.200 00:37:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.200 00:37:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.200 00:37:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.200 ************************************ 00:15:14.200 START TEST xnvme_bdevperf 00:15:14.200 ************************************ 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:14.200 00:37:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.200 { 00:15:14.200 "subsystems": [ 00:15:14.200 { 00:15:14.200 "subsystem": "bdev", 00:15:14.200 "config": [ 00:15:14.200 { 00:15:14.200 "params": { 00:15:14.200 "io_mechanism": "io_uring_cmd", 00:15:14.200 "conserve_cpu": true, 00:15:14.200 "filename": "/dev/ng0n1", 00:15:14.200 "name": "xnvme_bdev" 00:15:14.200 }, 00:15:14.200 "method": "bdev_xnvme_create" 00:15:14.200 }, 00:15:14.200 { 00:15:14.200 "method": "bdev_wait_for_examine" 00:15:14.200 } 00:15:14.200 ] 00:15:14.200 } 00:15:14.200 ] 00:15:14.200 } 00:15:14.200 [2024-11-27 00:37:50.964123] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:15:14.200 [2024-11-27 00:37:50.965069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83093 ] 00:15:14.462 [2024-11-27 00:37:51.128562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.462 [2024-11-27 00:37:51.158941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.724 Running I/O for 5 seconds... 00:15:16.616 38656.00 IOPS, 151.00 MiB/s [2024-11-27T00:37:54.350Z] 42048.00 IOPS, 164.25 MiB/s [2024-11-27T00:37:55.296Z] 40597.33 IOPS, 158.58 MiB/s [2024-11-27T00:37:56.682Z] 39744.00 IOPS, 155.25 MiB/s [2024-11-27T00:37:56.682Z] 40281.60 IOPS, 157.35 MiB/s 00:15:19.896 Latency(us) 00:15:19.896 [2024-11-27T00:37:56.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.896 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:19.896 xnvme_bdev : 5.00 40273.34 157.32 0.00 0.00 1585.14 844.41 5091.64 00:15:19.896 [2024-11-27T00:37:56.683Z] =================================================================================================================== 00:15:19.896 [2024-11-27T00:37:56.683Z] Total : 40273.34 157.32 0.00 0.00 1585.14 844.41 5091.64 00:15:19.896 00:37:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:19.896 00:37:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:19.896 00:37:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:19.896 00:37:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:19.896 00:37:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:19.896 { 00:15:19.896 "subsystems": [ 00:15:19.896 { 00:15:19.896 "subsystem": "bdev", 00:15:19.896 "config": [ 00:15:19.896 { 00:15:19.896 "params": { 00:15:19.896 "io_mechanism": "io_uring_cmd", 00:15:19.896 "conserve_cpu": true, 00:15:19.896 "filename": "/dev/ng0n1", 00:15:19.896 "name": "xnvme_bdev" 00:15:19.896 }, 00:15:19.896 "method": "bdev_xnvme_create" 00:15:19.896 }, 00:15:19.896 { 00:15:19.896 "method": "bdev_wait_for_examine" 00:15:19.896 } 00:15:19.896 ] 00:15:19.896 } 00:15:19.896 ] 00:15:19.896 } 00:15:19.896 [2024-11-27 00:37:56.532835] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
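Each of these bdevperf passes follows the same pattern: the bdev table is handed over as JSON on an inherited file descriptor (/dev/fd/62 above), not a file on disk, which is what gen_conf piped into the process provides. An equivalent-in-spirit standalone run from the repo root, with the config inlined via process substitution (flags and JSON body copied from the trace):

conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
   "conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}'
./build/examples/bdevperf --json <(printf '%s' "$conf") \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096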
00:15:19.896 [2024-11-27 00:37:56.533016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83162 ] 00:15:20.157 [2024-11-27 00:37:56.696323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.157 [2024-11-27 00:37:56.724784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.157 Running I/O for 5 seconds... 00:15:22.494 42354.00 IOPS, 165.45 MiB/s [2024-11-27T00:37:59.857Z] 42432.00 IOPS, 165.75 MiB/s [2024-11-27T00:38:01.241Z] 41123.67 IOPS, 160.64 MiB/s [2024-11-27T00:38:02.185Z] 40403.50 IOPS, 157.83 MiB/s 00:15:25.398 Latency(us) 00:15:25.398 [2024-11-27T00:38:02.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.398 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:25.398 xnvme_bdev : 5.00 39787.68 155.42 0.00 0.00 1604.07 401.72 8922.98 00:15:25.398 [2024-11-27T00:38:02.185Z] =================================================================================================================== 00:15:25.398 [2024-11-27T00:38:02.185Z] Total : 39787.68 155.42 0.00 0.00 1604.07 401.72 8922.98 00:15:25.398 00:38:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:25.398 00:38:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:25.398 00:38:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:25.398 00:38:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:25.398 00:38:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:25.398 { 00:15:25.398 "subsystems": [ 00:15:25.398 { 00:15:25.398 "subsystem": "bdev", 00:15:25.398 "config": [ 00:15:25.398 { 00:15:25.398 "params": { 00:15:25.398 "io_mechanism": "io_uring_cmd", 00:15:25.398 "conserve_cpu": true, 00:15:25.398 "filename": "/dev/ng0n1", 00:15:25.398 "name": "xnvme_bdev" 00:15:25.398 }, 00:15:25.398 "method": "bdev_xnvme_create" 00:15:25.398 }, 00:15:25.398 { 00:15:25.398 "method": "bdev_wait_for_examine" 00:15:25.398 } 00:15:25.398 ] 00:15:25.398 } 00:15:25.398 ] 00:15:25.398 } 00:15:25.398 [2024-11-27 00:38:02.094336] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:15:25.398 [2024-11-27 00:38:02.094466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83231 ] 00:15:25.659 [2024-11-27 00:38:02.256615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.659 [2024-11-27 00:38:02.285510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.659 Running I/O for 5 seconds... 
00:15:27.618 75264.00 IOPS, 294.00 MiB/s [2024-11-27T00:38:05.791Z] 77696.00 IOPS, 303.50 MiB/s [2024-11-27T00:38:06.734Z] 76394.67 IOPS, 298.42 MiB/s [2024-11-27T00:38:07.678Z] 78448.00 IOPS, 306.44 MiB/s 00:15:30.891 Latency(us) 00:15:30.891 [2024-11-27T00:38:07.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.891 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:30.891 xnvme_bdev : 5.00 79134.11 309.12 0.00 0.00 805.26 329.26 2886.10 00:15:30.891 [2024-11-27T00:38:07.678Z] =================================================================================================================== 00:15:30.891 [2024-11-27T00:38:07.678Z] Total : 79134.11 309.12 0.00 0.00 805.26 329.26 2886.10 00:15:30.891 00:38:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:30.891 00:38:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:30.891 00:38:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:30.891 00:38:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:30.891 00:38:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:30.891 { 00:15:30.891 "subsystems": [ 00:15:30.891 { 00:15:30.891 "subsystem": "bdev", 00:15:30.891 "config": [ 00:15:30.891 { 00:15:30.891 "params": { 00:15:30.891 "io_mechanism": "io_uring_cmd", 00:15:30.891 "conserve_cpu": true, 00:15:30.891 "filename": "/dev/ng0n1", 00:15:30.891 "name": "xnvme_bdev" 00:15:30.891 }, 00:15:30.891 "method": "bdev_xnvme_create" 00:15:30.891 }, 00:15:30.891 { 00:15:30.891 "method": "bdev_wait_for_examine" 00:15:30.891 } 00:15:30.891 ] 00:15:30.891 } 00:15:30.891 ] 00:15:30.891 } 00:15:30.891 [2024-11-27 00:38:07.643712] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:15:30.891 [2024-11-27 00:38:07.643878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83296 ] 00:15:31.153 [2024-11-27 00:38:07.799879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.153 [2024-11-27 00:38:07.828794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.153 Running I/O for 5 seconds... 
00:15:33.495 36807.00 IOPS, 143.78 MiB/s [2024-11-27T00:38:11.267Z] 33934.00 IOPS, 132.55 MiB/s [2024-11-27T00:38:12.208Z] 31578.67 IOPS, 123.35 MiB/s [2024-11-27T00:38:13.149Z] 30995.75 IOPS, 121.08 MiB/s [2024-11-27T00:38:13.149Z] 30050.60 IOPS, 117.39 MiB/s 00:15:36.362 Latency(us) 00:15:36.362 [2024-11-27T00:38:13.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.362 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:36.362 xnvme_bdev : 5.01 30023.50 117.28 0.00 0.00 2125.53 52.38 24802.86 00:15:36.362 [2024-11-27T00:38:13.149Z] =================================================================================================================== 00:15:36.362 [2024-11-27T00:38:13.149Z] Total : 30023.50 117.28 0.00 0.00 2125.53 52.38 24802.86 00:15:36.362 ************************************ 00:15:36.362 END TEST xnvme_bdevperf 00:15:36.362 ************************************ 00:15:36.362 00:15:36.362 real 0m22.252s 00:15:36.362 user 0m13.561s 00:15:36.362 sys 0m6.696s 00:15:36.362 00:38:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.362 00:38:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:36.623 00:38:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:36.623 00:38:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:36.623 00:38:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.623 00:38:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.623 ************************************ 00:15:36.623 START TEST xnvme_fio_plugin 00:15:36.623 ************************************ 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:36.623 00:38:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.623 { 00:15:36.623 "subsystems": [ 00:15:36.623 { 00:15:36.623 "subsystem": "bdev", 00:15:36.623 "config": [ 00:15:36.623 { 00:15:36.623 "params": { 00:15:36.623 "io_mechanism": "io_uring_cmd", 00:15:36.623 "conserve_cpu": true, 00:15:36.623 "filename": "/dev/ng0n1", 00:15:36.623 "name": "xnvme_bdev" 00:15:36.623 }, 00:15:36.623 "method": "bdev_xnvme_create" 00:15:36.623 }, 00:15:36.623 { 00:15:36.623 "method": "bdev_wait_for_examine" 00:15:36.623 } 00:15:36.623 ] 00:15:36.623 } 00:15:36.623 ] 00:15:36.623 } 00:15:36.884 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:36.884 fio-3.35 00:15:36.884 Starting 1 thread 00:15:42.172 00:15:42.172 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83397: Wed Nov 27 00:38:18 2024 00:15:42.172 read: IOPS=42.2k, BW=165MiB/s (173MB/s)(824MiB/5001msec) 00:15:42.172 slat (usec): min=2, max=141, avg= 3.23, stdev= 1.49 00:15:42.172 clat (usec): min=865, max=3544, avg=1387.63, stdev=245.21 00:15:42.172 lat (usec): min=868, max=3571, avg=1390.86, stdev=245.52 00:15:42.172 clat percentiles (usec): 00:15:42.172 | 1.00th=[ 996], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1172], 00:15:42.172 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1352], 60.00th=[ 1418], 00:15:42.172 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1713], 95.00th=[ 1844], 00:15:42.172 | 99.00th=[ 2089], 99.50th=[ 2180], 99.90th=[ 2474], 99.95th=[ 2900], 00:15:42.172 | 99.99th=[ 3392] 00:15:42.172 bw ( KiB/s): min=145920, max=186880, per=100.00%, avg=169756.44, stdev=13046.24, samples=9 00:15:42.172 iops : min=36480, max=46720, avg=42439.11, stdev=3261.56, samples=9 00:15:42.172 lat (usec) : 1000=1.13% 00:15:42.172 lat (msec) : 2=97.00%, 4=1.87% 00:15:42.172 cpu : usr=65.82%, sys=31.22%, ctx=10, majf=0, minf=771 00:15:42.172 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:42.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.172 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:42.172 issued rwts: total=210944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:42.172 00:15:42.172 Run status group 0 (all jobs): 00:15:42.172 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=824MiB (864MB), run=5001-5001msec 00:15:42.433 ----------------------------------------------------- 00:15:42.433 Suppressions used: 00:15:42.433 count bytes template 00:15:42.433 1 11 /usr/src/fio/parse.c 00:15:42.433 1 8 libtcmalloc_minimal.so 00:15:42.433 1 904 libcrypto.so 00:15:42.433 ----------------------------------------------------- 00:15:42.433 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.693 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.694 00:38:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.694 { 00:15:42.694 "subsystems": [ 00:15:42.694 { 00:15:42.694 "subsystem": "bdev", 00:15:42.694 "config": [ 00:15:42.694 { 00:15:42.694 "params": { 00:15:42.694 "io_mechanism": "io_uring_cmd", 00:15:42.694 "conserve_cpu": true, 00:15:42.694 "filename": "/dev/ng0n1", 00:15:42.694 "name": "xnvme_bdev" 00:15:42.694 }, 00:15:42.694 "method": "bdev_xnvme_create" 00:15:42.694 }, 00:15:42.694 { 00:15:42.694 "method": "bdev_wait_for_examine" 00:15:42.694 } 00:15:42.694 ] 00:15:42.694 } 00:15:42.694 ] 00:15:42.694 } 00:15:42.694 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:42.694 fio-3.35 00:15:42.694 Starting 1 thread 00:15:49.277 00:15:49.277 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83482: Wed Nov 27 00:38:24 2024 00:15:49.277 write: IOPS=42.5k, BW=166MiB/s (174MB/s)(830MiB/5001msec); 0 zone resets 00:15:49.277 slat (usec): min=2, max=150, avg= 3.89, stdev= 2.13 00:15:49.277 clat (usec): min=121, max=25000, avg=1353.91, stdev=538.09 00:15:49.277 lat (usec): min=125, max=25012, avg=1357.80, stdev=538.52 00:15:49.277 clat percentiles (usec): 00:15:49.277 | 1.00th=[ 971], 5.00th=[ 1045], 10.00th=[ 1074], 20.00th=[ 1139], 00:15:49.277 | 30.00th=[ 1188], 40.00th=[ 1221], 50.00th=[ 1270], 60.00th=[ 1336], 00:15:49.277 | 70.00th=[ 1418], 80.00th=[ 1532], 90.00th=[ 1696], 95.00th=[ 1844], 00:15:49.277 | 99.00th=[ 2180], 99.50th=[ 2311], 99.90th=[ 3687], 99.95th=[15401], 00:15:49.277 | 99.99th=[22938] 00:15:49.277 bw ( KiB/s): min=148248, max=187112, per=100.00%, avg=170118.22, stdev=14125.85, samples=9 00:15:49.277 iops : min=37062, max=46778, avg=42529.56, stdev=3531.46, samples=9 00:15:49.277 lat (usec) : 250=0.01%, 500=0.03%, 750=0.04%, 1000=1.90% 00:15:49.277 lat (msec) : 2=95.60%, 4=2.34%, 10=0.02%, 20=0.04%, 50=0.03% 00:15:49.277 cpu : usr=66.74%, sys=27.10%, ctx=8, majf=0, minf=771 00:15:49.277 IO depths : 1=1.4%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.7% 00:15:49.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.277 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:49.277 issued rwts: total=0,212551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.277 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:49.277 00:15:49.277 Run status group 0 (all jobs): 00:15:49.277 WRITE: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=830MiB (871MB), run=5001-5001msec 00:15:49.277 ----------------------------------------------------- 00:15:49.277 Suppressions used: 00:15:49.277 count bytes template 00:15:49.277 1 11 /usr/src/fio/parse.c 00:15:49.277 1 8 libtcmalloc_minimal.so 00:15:49.277 1 904 libcrypto.so 00:15:49.277 ----------------------------------------------------- 00:15:49.277 00:15:49.277 ************************************ 00:15:49.277 END TEST xnvme_fio_plugin 00:15:49.277 ************************************ 00:15:49.277 00:15:49.277 real 0m12.032s 00:15:49.277 user 0m7.752s 00:15:49.277 sys 0m3.501s 00:15:49.277 00:38:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.277 00:38:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:49.277 Process with pid 83041 is not found 00:15:49.277 00:38:25 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 83041 00:15:49.277 00:38:25 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83041 ']' 00:15:49.277 00:38:25 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 83041 00:15:49.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83041) - No such process 00:15:49.277 00:38:25 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 83041 is not found' 00:15:49.277 ************************************ 00:15:49.277 END TEST nvme_xnvme 00:15:49.277 ************************************ 00:15:49.277 00:15:49.277 real 2m58.769s 00:15:49.277 user 1m26.483s 00:15:49.277 sys 1m18.168s 00:15:49.277 00:38:25 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.277 00:38:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.277 00:38:25 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:49.277 00:38:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.277 00:38:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.277 00:38:25 -- common/autotest_common.sh@10 -- # set +x 00:15:49.277 ************************************ 00:15:49.277 START TEST blockdev_xnvme 00:15:49.277 ************************************ 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:49.277 * Looking for test storage... 00:15:49.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.277 00:38:25 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.277 --rc genhtml_branch_coverage=1 00:15:49.277 --rc genhtml_function_coverage=1 00:15:49.277 --rc genhtml_legend=1 00:15:49.277 --rc geninfo_all_blocks=1 00:15:49.277 --rc geninfo_unexecuted_blocks=1 00:15:49.277 00:15:49.277 ' 00:15:49.277 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.277 --rc genhtml_branch_coverage=1 00:15:49.277 --rc genhtml_function_coverage=1 00:15:49.277 --rc genhtml_legend=1 00:15:49.277 --rc geninfo_all_blocks=1 00:15:49.277 --rc geninfo_unexecuted_blocks=1 00:15:49.277 00:15:49.277 ' 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.278 --rc genhtml_branch_coverage=1 00:15:49.278 --rc genhtml_function_coverage=1 00:15:49.278 --rc genhtml_legend=1 00:15:49.278 --rc geninfo_all_blocks=1 00:15:49.278 --rc geninfo_unexecuted_blocks=1 00:15:49.278 00:15:49.278 ' 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.278 --rc genhtml_branch_coverage=1 00:15:49.278 --rc genhtml_function_coverage=1 00:15:49.278 --rc genhtml_legend=1 00:15:49.278 --rc geninfo_all_blocks=1 00:15:49.278 --rc geninfo_unexecuted_blocks=1 00:15:49.278 00:15:49.278 ' 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83611 00:15:49.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83611 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83611 ']' 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.278 00:38:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.278 00:38:25 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:49.278 [2024-11-27 00:38:25.631806] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:15:49.278 [2024-11-27 00:38:25.631991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83611 ] 00:15:49.278 [2024-11-27 00:38:25.797762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.278 [2024-11-27 00:38:25.827275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.850 00:38:26 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.850 00:38:26 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:49.850 00:38:26 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:49.850 00:38:26 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:49.850 00:38:26 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:49.850 00:38:26 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:49.850 00:38:26 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.684 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:50.946 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:50.946 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:50.946 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
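The zoned-device filter being traced here reads a single sysfs attribute per namespace: a device is treated as zoned whenever queue/zoned reports anything other than none, and is then excluded from the xnvme bdev list. Reduced to a standalone loop (sysfs layout as seen above; array keying simplified for the sketch):

# collect zoned namespaces so the xnvme tests can skip them
declare -A zoned_devs
for dev in /sys/block/nvme*; do
  [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] \
    && zoned_devs[${dev##*/}]=1   # none of the six namespaces here match
done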
00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:50.946 nvme0n1 00:15:50.946 nvme0n2 00:15:50.946 nvme0n3 00:15:50.946 nvme1n1 00:15:50.946 nvme2n1 00:15:50.946 nvme3n1 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.946 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.946 00:38:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.947 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:50.947 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.947 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.947 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.947 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:50.947 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:50.947 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.947 00:38:27 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.947 00:38:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.208 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:51.208 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "65d54555-d064-4c66-b94f-c4e817200d70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65d54555-d064-4c66-b94f-c4e817200d70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "bc0d5d20-06ed-4d83-aa09-fe890177416d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc0d5d20-06ed-4d83-aa09-fe890177416d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "570ac810-7562-4b42-a7e0-fa35bb166e20"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "570ac810-7562-4b42-a7e0-fa35bb166e20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cd3edc68-5597-446f-be1a-f25034e3e134"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cd3edc68-5597-446f-be1a-f25034e3e134",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "52ab9e5a-cf21-4429-9125-7e1cd6d518d7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "52ab9e5a-cf21-4429-9125-7e1cd6d518d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cc320f1e-b690-4264-8528-a8f047c24481"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cc320f1e-b690-4264-8528-a8f047c24481",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:51.208 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:51.208 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:51.208 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:51.208 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:51.208 00:38:27 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83611 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83611 ']' 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83611 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83611 00:15:51.208 killing process with pid 83611 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83611' 00:15:51.208 00:38:27 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83611 00:15:51.208 
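The setup traced above reduces to one idea: every /dev/nvme*n* block node gets wrapped in an xNVMe bdev driven through io_uring, and the resulting bdevs are then listed back out of the target. A minimal sketch of that RPC sequence, assuming a running SPDK target and the repo's scripts/rpc.py on the default socket (the device names, the io_uring mechanism, and the -c flag are copied from the xtrace; everything else is illustrative):

    # Wrap each NVMe namespace node in an xNVMe bdev (io_uring I/O path).
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue          # skip non-block entries, as the harness does
        scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
    done
    # List the unclaimed bdevs that were just created.
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'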
00:38:27 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83611 00:15:51.470 00:38:28 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:51.470 00:38:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:51.470 00:38:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:51.470 00:38:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.470 00:38:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 ************************************ 00:15:51.470 START TEST bdev_hello_world 00:15:51.470 ************************************ 00:15:51.470 00:38:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:51.470 [2024-11-27 00:38:28.169604] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:15:51.470 [2024-11-27 00:38:28.169756] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83873 ] 00:15:51.732 [2024-11-27 00:38:28.334019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.732 [2024-11-27 00:38:28.363228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.993 [2024-11-27 00:38:28.586729] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:51.993 [2024-11-27 00:38:28.587023] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:51.993 [2024-11-27 00:38:28.587067] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:51.993 [2024-11-27 00:38:28.589338] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:51.993 [2024-11-27 00:38:28.589987] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:51.993 [2024-11-27 00:38:28.590026] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:51.993 [2024-11-27 00:38:28.591092] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
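That read-back is the whole hello-world check: open one xNVMe bdev from the generated config, write a buffer, and read the same string out again. It can be reproduced with a single command; the binary, config path, and -b argument below are taken verbatim from the xtrace:

    # SPDK example app: opens bdev nvme0n1, writes "Hello World!", reads it back.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1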
00:15:51.993 00:15:51.993 [2024-11-27 00:38:28.591179] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:51.993 ************************************ 00:15:51.993 END TEST bdev_hello_world 00:15:51.993 00:15:51.993 real 0m0.671s 00:15:51.993 user 0m0.329s 00:15:51.993 sys 0m0.198s 00:15:51.993 00:38:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.993 00:38:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:51.993 ************************************ 00:15:52.254 00:38:28 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:52.254 00:38:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.254 00:38:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.254 00:38:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.254 ************************************ 00:15:52.254 START TEST bdev_bounds 00:15:52.254 ************************************ 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83904 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:52.254 Process bdevio pid: 83904 00:15:52.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83904' 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83904 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83904 ']' 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.254 00:38:28 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:52.254 [2024-11-27 00:38:28.921415] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:15:52.254 [2024-11-27 00:38:28.922457] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83904 ] 00:15:52.515 [2024-11-27 00:38:29.088033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.515 [2024-11-27 00:38:29.119627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.515 [2024-11-27 00:38:29.119897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.515 [2024-11-27 00:38:29.119970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.088 00:38:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.088 00:38:29 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:53.088 00:38:29 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:53.088 I/O targets: 00:15:53.088 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:53.088 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:53.088 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:53.088 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:53.088 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:53.088 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:53.088 00:15:53.088 00:15:53.088 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.088 http://cunit.sourceforge.net/ 00:15:53.088 00:15:53.088 00:15:53.088 Suite: bdevio tests on: nvme3n1 00:15:53.088 Test: blockdev write read block ...passed 00:15:53.349 Test: blockdev write zeroes read block ...passed 00:15:53.349 Test: blockdev write zeroes read no split ...passed 00:15:53.349 Test: blockdev write zeroes read split ...passed 00:15:53.349 Test: blockdev write zeroes read split partial ...passed 00:15:53.349 Test: blockdev reset ...passed 00:15:53.349 Test: blockdev write read 8 blocks ...passed 00:15:53.349 Test: blockdev write read size > 128k ...passed 00:15:53.349 Test: blockdev write read invalid size ...passed 00:15:53.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.349 Test: blockdev write read max offset ...passed 00:15:53.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.349 Test: blockdev writev readv 8 blocks ...passed 00:15:53.349 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.349 Test: blockdev writev readv block ...passed 00:15:53.349 Test: blockdev writev readv size > 128k ...passed 00:15:53.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.349 Test: blockdev comparev and writev ...passed 00:15:53.349 Test: blockdev nvme passthru rw ...passed 00:15:53.349 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.349 Test: blockdev nvme admin passthru ...passed 00:15:53.349 Test: blockdev copy ...passed 00:15:53.349 Suite: bdevio tests on: nvme2n1 00:15:53.349 Test: blockdev write read block ...passed 00:15:53.349 Test: blockdev write zeroes read block ...passed 00:15:53.349 Test: blockdev write zeroes read no split ...passed 00:15:53.349 Test: blockdev write zeroes read split ...passed 00:15:53.349 Test: blockdev write zeroes read split partial ...passed 00:15:53.349 Test: blockdev reset ...passed 
00:15:53.349 Test: blockdev write read 8 blocks ...passed 00:15:53.349 Test: blockdev write read size > 128k ...passed 00:15:53.349 Test: blockdev write read invalid size ...passed 00:15:53.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.349 Test: blockdev write read max offset ...passed 00:15:53.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.349 Test: blockdev writev readv 8 blocks ...passed 00:15:53.349 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.349 Test: blockdev writev readv block ...passed 00:15:53.349 Test: blockdev writev readv size > 128k ...passed 00:15:53.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.349 Test: blockdev comparev and writev ...passed 00:15:53.349 Test: blockdev nvme passthru rw ...passed 00:15:53.349 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.349 Test: blockdev nvme admin passthru ...passed 00:15:53.349 Test: blockdev copy ...passed 00:15:53.349 Suite: bdevio tests on: nvme1n1 00:15:53.349 Test: blockdev write read block ...passed 00:15:53.349 Test: blockdev write zeroes read block ...passed 00:15:53.349 Test: blockdev write zeroes read no split ...passed 00:15:53.349 Test: blockdev write zeroes read split ...passed 00:15:53.349 Test: blockdev write zeroes read split partial ...passed 00:15:53.349 Test: blockdev reset ...passed 00:15:53.349 Test: blockdev write read 8 blocks ...passed 00:15:53.349 Test: blockdev write read size > 128k ...passed 00:15:53.349 Test: blockdev write read invalid size ...passed 00:15:53.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.349 Test: blockdev write read max offset ...passed 00:15:53.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.349 Test: blockdev writev readv 8 blocks ...passed 00:15:53.349 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.349 Test: blockdev writev readv block ...passed 00:15:53.349 Test: blockdev writev readv size > 128k ...passed 00:15:53.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.349 Test: blockdev comparev and writev ...passed 00:15:53.349 Test: blockdev nvme passthru rw ...passed 00:15:53.349 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.349 Test: blockdev nvme admin passthru ...passed 00:15:53.349 Test: blockdev copy ...passed 00:15:53.349 Suite: bdevio tests on: nvme0n3 00:15:53.349 Test: blockdev write read block ...passed 00:15:53.349 Test: blockdev write zeroes read block ...passed 00:15:53.349 Test: blockdev write zeroes read no split ...passed 00:15:53.349 Test: blockdev write zeroes read split ...passed 00:15:53.349 Test: blockdev write zeroes read split partial ...passed 00:15:53.349 Test: blockdev reset ...passed 00:15:53.349 Test: blockdev write read 8 blocks ...passed 00:15:53.349 Test: blockdev write read size > 128k ...passed 00:15:53.349 Test: blockdev write read invalid size ...passed 00:15:53.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.349 Test: blockdev write read max offset ...passed 00:15:53.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.349 Test: blockdev writev readv 8 blocks 
...passed 00:15:53.349 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.349 Test: blockdev writev readv block ...passed 00:15:53.349 Test: blockdev writev readv size > 128k ...passed 00:15:53.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.349 Test: blockdev comparev and writev ...passed 00:15:53.349 Test: blockdev nvme passthru rw ...passed 00:15:53.349 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.349 Test: blockdev nvme admin passthru ...passed 00:15:53.349 Test: blockdev copy ...passed 00:15:53.349 Suite: bdevio tests on: nvme0n2 00:15:53.349 Test: blockdev write read block ...passed 00:15:53.349 Test: blockdev write zeroes read block ...passed 00:15:53.349 Test: blockdev write zeroes read no split ...passed 00:15:53.349 Test: blockdev write zeroes read split ...passed 00:15:53.349 Test: blockdev write zeroes read split partial ...passed 00:15:53.349 Test: blockdev reset ...passed 00:15:53.349 Test: blockdev write read 8 blocks ...passed 00:15:53.349 Test: blockdev write read size > 128k ...passed 00:15:53.349 Test: blockdev write read invalid size ...passed 00:15:53.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.349 Test: blockdev write read max offset ...passed 00:15:53.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.349 Test: blockdev writev readv 8 blocks ...passed 00:15:53.349 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.349 Test: blockdev writev readv block ...passed 00:15:53.349 Test: blockdev writev readv size > 128k ...passed 00:15:53.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.349 Test: blockdev comparev and writev ...passed 00:15:53.349 Test: blockdev nvme passthru rw ...passed 00:15:53.349 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.349 Test: blockdev nvme admin passthru ...passed 00:15:53.349 Test: blockdev copy ...passed 00:15:53.349 Suite: bdevio tests on: nvme0n1 00:15:53.349 Test: blockdev write read block ...passed 00:15:53.349 Test: blockdev write zeroes read block ...passed 00:15:53.349 Test: blockdev write zeroes read no split ...passed 00:15:53.349 Test: blockdev write zeroes read split ...passed 00:15:53.349 Test: blockdev write zeroes read split partial ...passed 00:15:53.349 Test: blockdev reset ...passed 00:15:53.611 Test: blockdev write read 8 blocks ...passed 00:15:53.611 Test: blockdev write read size > 128k ...passed 00:15:53.611 Test: blockdev write read invalid size ...passed 00:15:53.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.611 Test: blockdev write read max offset ...passed 00:15:53.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.611 Test: blockdev writev readv 8 blocks ...passed 00:15:53.611 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.611 Test: blockdev writev readv block ...passed 00:15:53.611 Test: blockdev writev readv size > 128k ...passed 00:15:53.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.611 Test: blockdev comparev and writev ...passed 00:15:53.611 Test: blockdev nvme passthru rw ...passed 00:15:53.611 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.611 Test: blockdev nvme admin passthru ...passed 00:15:53.611 Test: blockdev copy ...passed 
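Each of the six suites above runs the identical CUnit battery, so a failure pinpoints both the bdev and the I/O pattern. Per the xtrace, the harness has two pieces: a bdevio application started against the same JSON config, and an RPC driver that fires the tests once the app is listening. A sketch of running it standalone, with both command lines copied from the log (the trailing '' is the harness's empty extra-args slot; backgrounding the server here is illustrative):

    # Start bdevio on the generated config, then drive the test battery over RPC.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests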
00:15:53.611 00:15:53.611 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.611 suites 6 6 n/a 0 0 00:15:53.611 tests 138 138 138 0 0 00:15:53.611 asserts 780 780 780 0 n/a 00:15:53.611 00:15:53.611 Elapsed time = 0.623 seconds 00:15:53.611 0 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83904 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83904 ']' 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83904 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83904 00:15:53.611 killing process with pid 83904 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83904' 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83904 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83904 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:53.611 00:15:53.611 real 0m1.544s 00:15:53.611 user 0m3.707s 00:15:53.611 sys 0m0.357s 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.611 ************************************ 00:15:53.611 END TEST bdev_bounds 00:15:53.611 ************************************ 00:15:53.611 00:38:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:53.872 00:38:30 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:53.872 00:38:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:53.872 00:38:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.872 00:38:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.872 ************************************ 00:15:53.872 START TEST bdev_nbd 00:15:53.872 ************************************ 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
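The nbd pass that starts here exports each bdev as a kernel /dev/nbdX node (the /sys/module/nbd check below guards on the driver being loaded) and round-trips a single 4 KiB direct-I/O read through every node. Condensed to one device, the start/verify/stop flow from the xtrace looks like this; the socket path, RPC names, and dd invocation are all taken from the log:

    # Export bdev nvme0n1 as /dev/nbd0 over the dedicated RPC socket.
    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
    grep -q -w nbd0 /proc/partitions     # the waitfornbd polling check, one iteration
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct     # one-block read to prove the export works
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks   # expect: []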
00:15:53.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83959 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83959 /var/tmp/spdk-nbd.sock 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83959 ']' 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:53.872 00:38:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:53.872 [2024-11-27 00:38:30.547877] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:15:53.872 [2024-11-27 00:38:30.548619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.133 [2024-11-27 00:38:30.710375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.133 [2024-11-27 00:38:30.740426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.704 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:54.705 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.967 
1+0 records in 00:15:54.967 1+0 records out 00:15:54.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134008 s, 3.1 MB/s 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:54.967 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:55.228 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.229 1+0 records in 00:15:55.229 1+0 records out 00:15:55.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099876 s, 4.1 MB/s 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:55.229 00:38:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:55.490 00:38:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.490 1+0 records in 00:15:55.490 1+0 records out 00:15:55.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000876967 s, 4.7 MB/s 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:55.490 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:55.491 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.752 1+0 records in 00:15:55.752 1+0 records out 00:15:55.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124021 s, 3.3 MB/s 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:55.752 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:56.013 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:56.013 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:56.013 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:56.013 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:56.013 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:56.013 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.013 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.014 1+0 records in 00:15:56.014 1+0 records out 00:15:56.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00220942 s, 1.9 MB/s 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.014 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:56.275 00:38:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.275 1+0 records in 00:15:56.275 1+0 records out 00:15:56.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874751 s, 4.7 MB/s 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.275 00:38:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd0", 00:15:56.538 "bdev_name": "nvme0n1" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd1", 00:15:56.538 "bdev_name": "nvme0n2" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd2", 00:15:56.538 "bdev_name": "nvme0n3" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd3", 00:15:56.538 "bdev_name": "nvme1n1" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd4", 00:15:56.538 "bdev_name": "nvme2n1" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd5", 00:15:56.538 "bdev_name": "nvme3n1" 00:15:56.538 } 00:15:56.538 ]' 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd0", 00:15:56.538 "bdev_name": "nvme0n1" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd1", 00:15:56.538 "bdev_name": "nvme0n2" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd2", 00:15:56.538 "bdev_name": "nvme0n3" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd3", 00:15:56.538 "bdev_name": "nvme1n1" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd4", 00:15:56.538 "bdev_name": "nvme2n1" 00:15:56.538 }, 00:15:56.538 { 00:15:56.538 "nbd_device": "/dev/nbd5", 00:15:56.538 "bdev_name": "nvme3n1" 00:15:56.538 } 00:15:56.538 ]' 00:15:56.538 00:38:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.538 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.800 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.062 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.323 00:38:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.583 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.844 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:58.104 00:38:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:58.364 /dev/nbd0 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.364 1+0 records in 00:15:58.364 1+0 records out 00:15:58.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986153 s, 4.2 MB/s 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:58.364 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:58.624 /dev/nbd1 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.624 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.624 1+0 records in 00:15:58.624 1+0 records out 00:15:58.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000972235 s, 4.2 MB/s 00:15:58.625 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.625 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.625 00:38:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.625 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.625 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.625 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.625 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:58.625 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:58.886 /dev/nbd10 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.886 1+0 records in 00:15:58.886 1+0 records out 00:15:58.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802838 s, 5.1 MB/s 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:58.886 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:59.146 /dev/nbd11 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.146 00:38:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.146 1+0 records in 00:15:59.146 1+0 records out 00:15:59.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939786 s, 4.4 MB/s 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.146 00:38:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:59.407 /dev/nbd12 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.407 1+0 records in 00:15:59.407 1+0 records out 00:15:59.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134205 s, 3.1 MB/s 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.407 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:59.667 /dev/nbd13 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.667 1+0 records in 00:15:59.667 1+0 records out 00:15:59.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116981 s, 3.5 MB/s 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:59.667 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.668 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd0", 00:15:59.928 "bdev_name": "nvme0n1" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd1", 00:15:59.928 "bdev_name": "nvme0n2" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd10", 00:15:59.928 "bdev_name": "nvme0n3" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd11", 00:15:59.928 "bdev_name": "nvme1n1" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd12", 00:15:59.928 "bdev_name": "nvme2n1" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd13", 00:15:59.928 "bdev_name": "nvme3n1" 00:15:59.928 } 00:15:59.928 ]' 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd0", 00:15:59.928 "bdev_name": "nvme0n1" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd1", 00:15:59.928 "bdev_name": "nvme0n2" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd10", 00:15:59.928 "bdev_name": "nvme0n3" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd11", 00:15:59.928 "bdev_name": "nvme1n1" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd12", 00:15:59.928 "bdev_name": "nvme2n1" 00:15:59.928 }, 00:15:59.928 { 00:15:59.928 "nbd_device": "/dev/nbd13", 00:15:59.928 "bdev_name": "nvme3n1" 00:15:59.928 } 00:15:59.928 ]' 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:59.928 /dev/nbd1 00:15:59.928 /dev/nbd10 00:15:59.928 /dev/nbd11 00:15:59.928 /dev/nbd12 00:15:59.928 /dev/nbd13' 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:59.928 /dev/nbd1 00:15:59.928 /dev/nbd10 00:15:59.928 /dev/nbd11 00:15:59.928 /dev/nbd12 00:15:59.928 /dev/nbd13' 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:59.928 256+0 records in 00:15:59.928 256+0 records out 00:15:59.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0061088 s, 172 MB/s 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:59.928 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:00.188 256+0 records in 00:16:00.188 256+0 records out 00:16:00.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242798 s, 4.3 MB/s 00:16:00.189 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:00.189 00:38:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:00.450 256+0 records in 00:16:00.450 256+0 records out 00:16:00.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247299 s, 
4.2 MB/s 00:16:00.450 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:00.450 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:00.712 256+0 records in 00:16:00.712 256+0 records out 00:16:00.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187069 s, 5.6 MB/s 00:16:00.712 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:00.712 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:00.712 256+0 records in 00:16:00.712 256+0 records out 00:16:00.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240008 s, 4.4 MB/s 00:16:00.712 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:00.712 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:01.284 256+0 records in 00:16:01.284 256+0 records out 00:16:01.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.301971 s, 3.5 MB/s 00:16:01.284 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.284 00:38:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:01.284 256+0 records in 00:16:01.284 256+0 records out 00:16:01.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22238 s, 4.7 MB/s 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:01.284 
00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.284 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.546 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.807 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.068 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.328 00:38:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.595 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.858 
00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:02.858 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:03.119 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:03.119 malloc_lvol_verify 00:16:03.380 00:38:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:03.380 0b057a5e-8bfd-4eed-856c-29490e62b2f8 00:16:03.380 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:03.641 668376cd-a5af-4d21-9c44-e50ea8517b48 00:16:03.642 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:03.903 /dev/nbd0 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:03.903 mke2fs 1.47.0 (5-Feb-2023) 00:16:03.903 Discarding device blocks: 0/4096 
done 00:16:03.903 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:03.903 00:16:03.903 Allocating group tables: 0/1 done 00:16:03.903 Writing inode tables: 0/1 done 00:16:03.903 Creating journal (1024 blocks): done 00:16:03.903 Writing superblocks and filesystem accounting information: 0/1 done 00:16:03.903 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.903 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83959 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83959 ']' 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83959 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83959 00:16:04.209 killing process with pid 83959 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83959' 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83959 00:16:04.209 00:38:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83959 00:16:04.479 ************************************ 00:16:04.479 END TEST bdev_nbd 00:16:04.479 ************************************ 00:16:04.479 00:38:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:04.479 00:16:04.479 real 0m10.610s 00:16:04.479 user 0m14.332s 00:16:04.479 sys 0m3.947s 00:16:04.479 00:38:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.479 00:38:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
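(The NBD cycle that ends above is easier to follow condensed. The sketch below is reconstructed from the xtrace, not the verbatim nbd_common.sh / autotest_common.sh helpers; the scratch paths and the 0.1 s poll interval are assumptions made for illustration.)

  # Reconstructed outline of the bdev_nbd verify cycle traced above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  PATTERN=/tmp/nbdrandtest            # assumed scratch path

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do                        # up to 20 polls, as in the trace
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                          # assumed interval
      done
      # prove the device is actually usable: read one 4 KiB block through O_DIRECT
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
      rm -f /tmp/nbdtest
  }

  bdevs=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  for i in "${!bdevs[@]}"; do                                # export each bdev over NBD
      "$RPC" -s "$SOCK" nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
      waitfornbd "$(basename "${nbds[i]}")"
  done

  dd if=/dev/urandom of="$PATTERN" bs=4096 count=256         # 1 MiB random pattern
  for nbd in "${nbds[@]}"; do                                # write it to every device
      dd if="$PATTERN" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in "${nbds[@]}"; do                                # read back and compare
      cmp -b -n 1M "$PATTERN" "$nbd"
  done
  rm "$PATTERN"

  for nbd in "${nbds[@]}"; do                                # detach; the harness then polls
      "$RPC" -s "$SOCK" nbd_stop_disk "$nbd"                 # /proc/partitions until the name
  done                                                       # disappears (waitfornbd_exit)

(The lvol pass just above follows the same start/verify/stop cycle, except the exported bdev is lvs/lvol built via bdev_malloc_create, bdev_lvol_create_lvstore and bdev_lvol_create, and the payload check is mkfs.ext4 instead of cmp.)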
00:16:04.479 00:38:41 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:04.479 00:38:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:04.479 00:38:41 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:04.479 00:38:41 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:04.479 00:38:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.479 00:38:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.479 00:38:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.479 ************************************ 00:16:04.479 START TEST bdev_fio 00:16:04.479 ************************************ 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:04.479 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.479 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:04.480 ************************************ 00:16:04.480 START TEST bdev_fio_rw_verify 00:16:04.480 ************************************ 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:04.480 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.741 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:04.741 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:04.741 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:04.741 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:04.742 00:38:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:04.742 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:04.742 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:04.742 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:04.742 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:04.742 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:04.742 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:04.742 fio-3.35 00:16:04.742 Starting 6 threads 00:16:16.982 00:16:16.982 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=84367: Wed Nov 27 00:38:52 2024 00:16:16.982 read: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(542MiB/10001msec) 00:16:16.982 slat (usec): min=2, max=2269, avg= 7.37, stdev=17.93 00:16:16.982 clat (usec): min=97, max=385697, avg=1441.08, stdev=3015.56 00:16:16.982 lat (usec): min=100, max=385715, avg=1448.45, 
stdev=3015.71 00:16:16.982 clat percentiles (usec): 00:16:16.982 | 50.000th=[ 1319], 99.000th=[ 3752], 99.900th=[ 5407], 00:16:16.982 | 99.990th=[ 8586], 99.999th=[383779] 00:16:16.982 write: IOPS=14.1k, BW=54.9MiB/s (57.6MB/s)(549MiB/10001msec); 0 zone resets 00:16:16.982 slat (usec): min=13, max=4148, avg=41.71, stdev=136.85 00:16:16.982 clat (usec): min=97, max=7427, avg=1658.82, stdev=802.16 00:16:16.982 lat (usec): min=113, max=7445, avg=1700.53, stdev=813.98 00:16:16.982 clat percentiles (usec): 00:16:16.982 | 50.000th=[ 1532], 99.000th=[ 4113], 99.900th=[ 5473], 99.990th=[ 6783], 00:16:16.982 | 99.999th=[ 7439] 00:16:16.982 bw ( KiB/s): min=41005, max=74847, per=100.00%, avg=56419.11, stdev=1517.91, samples=114 00:16:16.982 iops : min=10249, max=18711, avg=14104.26, stdev=379.52, samples=114 00:16:16.982 lat (usec) : 100=0.01%, 250=1.25%, 500=5.31%, 750=7.69%, 1000=10.94% 00:16:16.982 lat (msec) : 2=51.63%, 4=22.18%, 10=1.00%, 500=0.01% 00:16:16.982 cpu : usr=44.41%, sys=32.09%, ctx=5232, majf=0, minf=14264 00:16:16.982 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.982 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.982 issued rwts: total=138726,140538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.982 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:16.982 00:16:16.982 Run status group 0 (all jobs): 00:16:16.982 READ: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=542MiB (568MB), run=10001-10001msec 00:16:16.982 WRITE: bw=54.9MiB/s (57.6MB/s), 54.9MiB/s-54.9MiB/s (57.6MB/s-57.6MB/s), io=549MiB (576MB), run=10001-10001msec 00:16:16.982 ----------------------------------------------------- 00:16:16.982 Suppressions used: 00:16:16.982 count bytes template 00:16:16.982 6 48 /usr/src/fio/parse.c 00:16:16.982 1724 165504 /usr/src/fio/iolog.c 00:16:16.982 1 8 libtcmalloc_minimal.so 00:16:16.982 1 904 libcrypto.so 00:16:16.982 ----------------------------------------------------- 00:16:16.982 00:16:16.982 00:16:16.982 real 0m11.217s 00:16:16.982 user 0m27.372s 00:16:16.982 sys 0m19.621s 00:16:16.982 00:38:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.983 ************************************ 00:16:16.983 END TEST bdev_fio_rw_verify 00:16:16.983 ************************************ 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:16.983 00:38:52 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "65d54555-d064-4c66-b94f-c4e817200d70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65d54555-d064-4c66-b94f-c4e817200d70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "bc0d5d20-06ed-4d83-aa09-fe890177416d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc0d5d20-06ed-4d83-aa09-fe890177416d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "570ac810-7562-4b42-a7e0-fa35bb166e20"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "570ac810-7562-4b42-a7e0-fa35bb166e20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' 
' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cd3edc68-5597-446f-be1a-f25034e3e134"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cd3edc68-5597-446f-be1a-f25034e3e134",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "alia 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:16.983 ses": [' ' "52ab9e5a-cf21-4429-9125-7e1cd6d518d7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "52ab9e5a-cf21-4429-9125-7e1cd6d518d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cc320f1e-b690-4264-8528-a8f047c24481"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cc320f1e-b690-4264-8528-a8f047c24481",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:16.983 /home/vagrant/spdk_repo/spdk 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:16.983 00:16:16.983 real 0m11.404s 00:16:16.983 
user 0m27.442s 00:16:16.983 sys 0m19.713s 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.983 00:38:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:16.983 ************************************ 00:16:16.983 END TEST bdev_fio 00:16:16.983 ************************************ 00:16:16.983 00:38:52 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:16.983 00:38:52 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:16.983 00:38:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:16.983 00:38:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.983 00:38:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.983 ************************************ 00:16:16.983 START TEST bdev_verify 00:16:16.983 ************************************ 00:16:16.983 00:38:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:16.983 [2024-11-27 00:38:52.697063] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:16:16.983 [2024-11-27 00:38:52.697202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84525 ] 00:16:16.983 [2024-11-27 00:38:52.856989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:16.983 [2024-11-27 00:38:52.888374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.983 [2024-11-27 00:38:52.888470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.983 Running I/O for 5 seconds... 
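(The verify pass now running is SPDK's bdevperf example, launched exactly as traced above. It is restated below for readability; this is a reading aid, not a new invocation. The -C note is inferred from the two-jobs-per-bdev layout in the result table rather than from documentation, and the trailing '' in the traced run_test line is an empty pass-through argument.)

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  args=(
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev definitions to load
      -q 128      # 128 outstanding I/Os per job
      -o 4096     # 4 KiB per I/O
      -w verify   # write a pattern, read it back, compare the payloads
      -t 5        # run for 5 seconds
      -C          # observed effect here: one job per core per bdev (two rows per device below)
      -m 0x3      # reactor core mask: cores 0 and 1
  )
  "$bdevperf" "${args[@]}"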
00:16:18.867 22368.00 IOPS, 87.38 MiB/s
[2024-11-27T00:38:56.595Z] 22032.00 IOPS, 86.06 MiB/s
[2024-11-27T00:38:57.539Z] 22985.33 IOPS, 89.79 MiB/s
[2024-11-27T00:38:58.482Z] 22688.00 IOPS, 88.62 MiB/s
00:16:21.695 Latency(us)
[2024-11-27T00:38:58.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:21.695 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x0 length 0x80000
00:16:21.695 nvme0n1 : 5.05 1772.82 6.93 0.00 0.00 72083.80 10536.17 76626.71
00:16:21.695 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x80000 length 0x80000
00:16:21.695 nvme0n1 : 5.06 1920.82 7.50 0.00 0.00 66519.10 6654.42 70980.53
00:16:21.695 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x0 length 0x80000
00:16:21.695 nvme0n2 : 5.04 1753.02 6.85 0.00 0.00 72775.94 11241.94 67754.14
00:16:21.695 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x80000 length 0x80000
00:16:21.695 nvme0n2 : 5.05 1901.23 7.43 0.00 0.00 67077.38 10536.17 68560.74
00:16:21.695 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x0 length 0x80000
00:16:21.695 nvme0n3 : 5.04 1752.33 6.85 0.00 0.00 72674.41 13308.85 74206.92
00:16:21.695 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x80000 length 0x80000
00:16:21.695 nvme0n3 : 5.04 1906.00 7.45 0.00 0.00 66704.76 12048.54 67754.14
00:16:21.695 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x0 length 0x20000
00:16:21.695 nvme1n1 : 5.05 1749.25 6.83 0.00 0.00 72604.30 12653.49 76626.71
00:16:21.695 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x20000 length 0x20000
00:16:21.695 nvme1n1 : 5.08 1914.84 7.48 0.00 0.00 66257.59 7208.96 62511.26
00:16:21.695 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x0 length 0xbd0bd
00:16:21.695 nvme2n1 : 5.07 2361.45 9.22 0.00 0.00 53642.61 5343.70 65737.65
00:16:21.695 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:16:21.695 nvme2n1 : 5.08 2537.01 9.91 0.00 0.00 49873.28 2571.03 56058.49
00:16:21.695 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0x0 length 0xa0000
00:16:21.695 nvme3n1 : 5.06 1719.34 6.72 0.00 0.00 73579.55 6377.16 112923.57
00:16:21.695 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:21.695 Verification LBA range: start 0xa0000 length 0xa0000
00:16:21.695 nvme3n1 : 5.08 1588.95 6.21 0.00 0.00 79443.04 11342.77 94775.14
[2024-11-27T00:38:58.482Z] ===================================================================================================================
[2024-11-27T00:38:58.482Z] Total : 22877.08 89.36 0.00 0.00 66647.24 2571.03 112923.57
00:16:21.695
00:16:21.695 real 0m5.844s
00:16:21.695 user 0m9.230s
00:16:21.695 sys 0m1.556s
00:16:21.695 00:38:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:21.695
************************************ 00:16:21.695 END TEST bdev_verify 00:16:21.695 ************************************ 00:16:21.695 00:38:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:21.958 00:38:58 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:21.958 00:38:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:21.958 00:38:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.958 00:38:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:21.958 ************************************ 00:16:21.958 START TEST bdev_verify_big_io 00:16:21.958 ************************************ 00:16:21.958 00:38:58 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:21.958 [2024-11-27 00:38:58.621004] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:16:21.958 [2024-11-27 00:38:58.621172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84614 ] 00:16:22.219 [2024-11-27 00:38:58.786210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:22.219 [2024-11-27 00:38:58.816514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.219 [2024-11-27 00:38:58.816627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.480 Running I/O for 5 seconds... 
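bdev_verify_big_io repeats the same verification workload with one change: -o 65536 selects 64 KiB I/Os instead of 4 KiB. Schematically, under the same assumptions as the sketch above:

    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3

With blocks sixteen times larger, per-device IOPS in the table below drops by roughly an order of magnitude while aggregate MiB/s stays in the same ballpark, which is the expected shape for a bandwidth-bound verify pass.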
00:16:28.714 1857.00 IOPS, 116.06 MiB/s [2024-11-27T00:39:05.761Z] 2784.50 IOPS, 174.03 MiB/s 00:16:28.974 Latency(us) 00:16:28.974 [2024-11-27T00:39:05.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.974 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x0 length 0x8000 00:16:28.974 nvme0n1 : 6.05 81.97 5.12 0.00 0.00 1502745.77 102841.11 1845493.76 00:16:28.974 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x8000 length 0x8000 00:16:28.974 nvme0n1 : 5.66 135.61 8.48 0.00 0.00 903525.28 5343.70 1006632.96 00:16:28.974 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x0 length 0x8000 00:16:28.974 nvme0n2 : 6.05 56.83 3.55 0.00 0.00 1990563.97 30247.38 2852126.72 00:16:28.974 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x8000 length 0x8000 00:16:28.974 nvme0n2 : 5.83 120.78 7.55 0.00 0.00 998485.64 92355.35 1251838.42 00:16:28.974 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x0 length 0x8000 00:16:28.974 nvme0n3 : 6.11 78.59 4.91 0.00 0.00 1389938.82 5520.15 1542213.32 00:16:28.974 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x8000 length 0x8000 00:16:28.974 nvme0n3 : 6.00 136.02 8.50 0.00 0.00 858233.46 124215.93 787238.60 00:16:28.974 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x0 length 0x2000 00:16:28.974 nvme1n1 : 6.36 81.73 5.11 0.00 0.00 1253810.90 49202.41 1090519.04 00:16:28.974 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x2000 length 0x2000 00:16:28.974 nvme1n1 : 5.94 140.13 8.76 0.00 0.00 812845.63 122602.73 819502.47 00:16:28.974 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x0 length 0xbd0b 00:16:28.974 nvme2n1 : 6.34 161.62 10.10 0.00 0.00 614637.00 13712.15 1703532.70 00:16:28.974 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:28.974 nvme2n1 : 6.00 144.27 9.02 0.00 0.00 760815.32 22786.36 1703532.70 00:16:28.974 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0x0 length 0xa000 00:16:28.974 nvme3n1 : 6.54 205.45 12.84 0.00 0.00 460410.42 724.68 3716798.62 00:16:28.974 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:28.974 Verification LBA range: start 0xa000 length 0xa000 00:16:28.974 nvme3n1 : 6.01 125.20 7.83 0.00 0.00 863908.83 1915.67 2361715.79 00:16:28.974 [2024-11-27T00:39:05.761Z] =================================================================================================================== 00:16:28.974 [2024-11-27T00:39:05.761Z] Total : 1468.23 91.76 0.00 0.00 903148.15 724.68 3716798.62 00:16:29.233 00:16:29.233 real 0m7.372s 00:16:29.233 user 0m13.568s 00:16:29.233 sys 0m0.457s 00:16:29.233 00:39:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.233 00:39:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:16:29.233 ************************************ 00:16:29.233 END TEST bdev_verify_big_io 00:16:29.233 ************************************ 00:16:29.233 00:39:05 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:29.233 00:39:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:29.233 00:39:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.233 00:39:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.233 ************************************ 00:16:29.233 START TEST bdev_write_zeroes 00:16:29.233 ************************************ 00:16:29.233 00:39:05 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:29.494 [2024-11-27 00:39:06.065070] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:16:29.494 [2024-11-27 00:39:06.065228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84718 ] 00:16:29.494 [2024-11-27 00:39:06.221843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.494 [2024-11-27 00:39:06.250556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.756 Running I/O for 1 seconds... 00:16:31.143 71008.00 IOPS, 277.38 MiB/s 00:16:31.143 Latency(us) 00:16:31.143 [2024-11-27T00:39:07.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.144 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:31.144 nvme0n1 : 1.02 11569.41 45.19 0.00 0.00 11052.21 7864.32 24399.56 00:16:31.144 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:31.144 nvme0n2 : 1.02 11554.91 45.14 0.00 0.00 11055.87 7965.14 23290.49 00:16:31.144 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:31.144 nvme0n3 : 1.02 11540.96 45.08 0.00 0.00 11059.71 8065.97 22282.24 00:16:31.144 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:31.144 nvme1n1 : 1.02 11526.68 45.03 0.00 0.00 11060.17 7864.32 21979.77 00:16:31.144 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:31.144 nvme2n1 : 1.03 12666.89 49.48 0.00 0.00 10055.20 3402.83 19660.80 00:16:31.144 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:31.144 nvme3n1 : 1.03 11438.82 44.68 0.00 0.00 11095.18 7914.73 28835.84 00:16:31.144 [2024-11-27T00:39:07.931Z] =================================================================================================================== 00:16:31.144 [2024-11-27T00:39:07.931Z] Total : 70297.66 274.60 0.00 0.00 10881.79 3402.83 28835.84 00:16:31.144 00:16:31.144 real 0m1.778s 00:16:31.144 user 0m1.051s 00:16:31.144 sys 0m0.533s 00:16:31.144 ************************************ 00:16:31.144 END TEST bdev_write_zeroes 00:16:31.144 00:39:07 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.144 00:39:07 blockdev_xnvme.bdev_write_zeroes -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.144 ************************************ 00:16:31.144 00:39:07 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.144 00:39:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:31.144 00:39:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.144 00:39:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.144 ************************************ 00:16:31.144 START TEST bdev_json_nonenclosed 00:16:31.144 ************************************ 00:16:31.144 00:39:07 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.144 [2024-11-27 00:39:07.916065] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:16:31.144 [2024-11-27 00:39:07.916230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84761 ] 00:16:31.406 [2024-11-27 00:39:08.080590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.406 [2024-11-27 00:39:08.110296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.406 [2024-11-27 00:39:08.110434] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:31.406 [2024-11-27 00:39:08.110458] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:31.406 [2024-11-27 00:39:08.110478] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:31.668 00:16:31.668 real 0m0.361s 00:16:31.668 user 0m0.138s 00:16:31.668 sys 0m0.117s 00:16:31.668 00:39:08 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.668 ************************************ 00:16:31.668 END TEST bdev_json_nonenclosed 00:16:31.668 ************************************ 00:16:31.668 00:39:08 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:31.668 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.668 00:39:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:31.668 00:39:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.668 00:39:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.668 ************************************ 00:16:31.668 START TEST bdev_json_nonarray 00:16:31.668 ************************************ 00:16:31.668 00:39:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.668 [2024-11-27 00:39:08.332932] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
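bdev_json_nonenclosed above and bdev_json_nonarray (whose startup continues below) are negative tests of SPDK's JSON config loader: each feeds bdevperf a deliberately malformed config and passes only if json_config rejects it with the expected error and the app stops non-zero. A valid config is a single JSON object holding a "subsystems" array, as the save_config dumps later in this log show, so the broken variants look roughly like this (hypothetical minimal contents, reconstructed from the error strings in the log):

    valid config:        { "subsystems": [] }
    nonenclosed.json:    "subsystems": []        (rejected: not enclosed in {})
    nonarray.json:       { "subsystems": {} }    (rejected: 'subsystems' should be an array)

In both cases spdk_app_stop exits non-zero and the test wrapper treats that as success.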
00:16:31.668 [2024-11-27 00:39:08.333085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84787 ] 00:16:31.929 [2024-11-27 00:39:08.497824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.929 [2024-11-27 00:39:08.526510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.929 [2024-11-27 00:39:08.526653] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:31.929 [2024-11-27 00:39:08.526674] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:31.929 [2024-11-27 00:39:08.526688] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:31.929 00:16:31.929 real 0m0.347s 00:16:31.929 user 0m0.142s 00:16:31.929 sys 0m0.100s 00:16:31.929 00:39:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.929 00:39:08 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:31.929 ************************************ 00:16:31.929 END TEST bdev_json_nonarray 00:16:31.929 ************************************ 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:31.929 00:39:08 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:32.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:35.803 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:35.803 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:35.803 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:35.803 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:35.803 00:16:35.803 real 0m46.701s 00:16:35.803 user 1m13.688s 00:16:35.803 sys 0m32.904s 00:16:35.803 00:39:12 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.803 ************************************ 00:16:35.803 END TEST blockdev_xnvme 00:16:35.803 ************************************ 00:16:35.803 00:39:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:35.803 00:39:12 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:35.803 00:39:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.803 00:39:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.803 00:39:12 -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.803 ************************************ 00:16:35.803 START TEST ublk 00:16:35.803 ************************************ 00:16:35.803 00:39:12 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:35.803 * Looking for test storage... 00:16:35.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:35.803 00:39:12 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:35.803 00:39:12 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:35.803 00:39:12 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:35.803 00:39:12 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.803 00:39:12 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.803 00:39:12 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.803 00:39:12 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.803 00:39:12 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.803 00:39:12 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.803 00:39:12 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.803 00:39:12 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.803 00:39:12 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:35.803 00:39:12 ublk -- scripts/common.sh@345 -- # : 1 00:16:35.803 00:39:12 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.803 00:39:12 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.803 00:39:12 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:35.803 00:39:12 ublk -- scripts/common.sh@353 -- # local d=1 00:16:35.803 00:39:12 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.803 00:39:12 ublk -- scripts/common.sh@355 -- # echo 1 00:16:35.803 00:39:12 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.803 00:39:12 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@353 -- # local d=2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.803 00:39:12 ublk -- scripts/common.sh@355 -- # echo 2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.803 00:39:12 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.803 00:39:12 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.803 00:39:12 ublk -- scripts/common.sh@368 -- # return 0 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:35.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.804 --rc genhtml_branch_coverage=1 00:16:35.804 --rc genhtml_function_coverage=1 00:16:35.804 --rc genhtml_legend=1 00:16:35.804 --rc geninfo_all_blocks=1 00:16:35.804 --rc geninfo_unexecuted_blocks=1 00:16:35.804 00:16:35.804 ' 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:35.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.804 --rc genhtml_branch_coverage=1 00:16:35.804 --rc genhtml_function_coverage=1 00:16:35.804 --rc genhtml_legend=1 00:16:35.804 --rc geninfo_all_blocks=1 00:16:35.804 --rc geninfo_unexecuted_blocks=1 00:16:35.804 00:16:35.804 ' 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:35.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.804 --rc genhtml_branch_coverage=1 00:16:35.804 --rc genhtml_function_coverage=1 00:16:35.804 --rc genhtml_legend=1 00:16:35.804 --rc geninfo_all_blocks=1 00:16:35.804 --rc geninfo_unexecuted_blocks=1 00:16:35.804 00:16:35.804 ' 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:35.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.804 --rc genhtml_branch_coverage=1 00:16:35.804 --rc genhtml_function_coverage=1 00:16:35.804 --rc genhtml_legend=1 00:16:35.804 --rc geninfo_all_blocks=1 00:16:35.804 --rc geninfo_unexecuted_blocks=1 00:16:35.804 00:16:35.804 ' 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:35.804 00:39:12 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:35.804 00:39:12 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:35.804 00:39:12 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:35.804 00:39:12 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:35.804 00:39:12 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:35.804 00:39:12 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:35.804 00:39:12 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:35.804 00:39:12 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:35.804 00:39:12 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:35.804 00:39:12 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.804 00:39:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.804 ************************************ 00:16:35.804 START TEST test_save_ublk_config 00:16:35.804 ************************************ 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=85073 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 85073 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85073 ']' 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.804 00:39:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:35.804 [2024-11-27 00:39:12.426941] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
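test_save_ublk_config, which is just starting here, exercises configuration save and restore for the ublk subsystem: it launches spdk_tgt with ublk debug logging (-L ublk), creates a ublk target and a malloc-backed ublk disk over RPC, snapshots the live configuration with save_config, then restarts the target from that JSON and checks that /dev/ublkb0 reappears. The RPC sequence reduces to roughly the following sketch, using SPDK's scripts/rpc.py front end (the malloc sizing is illustrative; the log's malloc0 is 8192 blocks of 4096 bytes, i.e. 32 MiB):

    scripts/rpc.py ublk_create_target                      # saved config shows cpumask "1"
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 32 MiB ram-backed bdev named malloc0
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # expose it as /dev/ublkb0, 1 queue, depth 128
    scripts/rpc.py save_config > ublk_config.json          # dump the running config as JSON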
00:16:35.804 [2024-11-27 00:39:12.427086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85073 ] 00:16:36.103 [2024-11-27 00:39:12.589543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.103 [2024-11-27 00:39:12.618420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:36.673 [2024-11-27 00:39:13.279877] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:36.673 [2024-11-27 00:39:13.280811] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:36.673 malloc0 00:16:36.673 [2024-11-27 00:39:13.311993] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:36.673 [2024-11-27 00:39:13.312101] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:36.673 [2024-11-27 00:39:13.312110] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:36.673 [2024-11-27 00:39:13.312128] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.673 [2024-11-27 00:39:13.320976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.673 [2024-11-27 00:39:13.321021] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.673 [2024-11-27 00:39:13.327904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.673 [2024-11-27 00:39:13.328032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:36.673 [2024-11-27 00:39:13.344883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.673 0 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.673 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:36.932 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.932 00:39:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:36.932 "subsystems": [ 00:16:36.932 { 00:16:36.932 "subsystem": "fsdev", 00:16:36.932 "config": [ 00:16:36.932 { 00:16:36.932 "method": "fsdev_set_opts", 00:16:36.932 "params": { 00:16:36.933 "fsdev_io_pool_size": 65535, 00:16:36.933 "fsdev_io_cache_size": 256 00:16:36.933 } 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "keyring", 00:16:36.933 "config": [] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "iobuf", 00:16:36.933 "config": [ 00:16:36.933 { 
00:16:36.933 "method": "iobuf_set_options", 00:16:36.933 "params": { 00:16:36.933 "small_pool_count": 8192, 00:16:36.933 "large_pool_count": 1024, 00:16:36.933 "small_bufsize": 8192, 00:16:36.933 "large_bufsize": 135168, 00:16:36.933 "enable_numa": false 00:16:36.933 } 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "sock", 00:16:36.933 "config": [ 00:16:36.933 { 00:16:36.933 "method": "sock_set_default_impl", 00:16:36.933 "params": { 00:16:36.933 "impl_name": "posix" 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "sock_impl_set_options", 00:16:36.933 "params": { 00:16:36.933 "impl_name": "ssl", 00:16:36.933 "recv_buf_size": 4096, 00:16:36.933 "send_buf_size": 4096, 00:16:36.933 "enable_recv_pipe": true, 00:16:36.933 "enable_quickack": false, 00:16:36.933 "enable_placement_id": 0, 00:16:36.933 "enable_zerocopy_send_server": true, 00:16:36.933 "enable_zerocopy_send_client": false, 00:16:36.933 "zerocopy_threshold": 0, 00:16:36.933 "tls_version": 0, 00:16:36.933 "enable_ktls": false 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "sock_impl_set_options", 00:16:36.933 "params": { 00:16:36.933 "impl_name": "posix", 00:16:36.933 "recv_buf_size": 2097152, 00:16:36.933 "send_buf_size": 2097152, 00:16:36.933 "enable_recv_pipe": true, 00:16:36.933 "enable_quickack": false, 00:16:36.933 "enable_placement_id": 0, 00:16:36.933 "enable_zerocopy_send_server": true, 00:16:36.933 "enable_zerocopy_send_client": false, 00:16:36.933 "zerocopy_threshold": 0, 00:16:36.933 "tls_version": 0, 00:16:36.933 "enable_ktls": false 00:16:36.933 } 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "vmd", 00:16:36.933 "config": [] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "accel", 00:16:36.933 "config": [ 00:16:36.933 { 00:16:36.933 "method": "accel_set_options", 00:16:36.933 "params": { 00:16:36.933 "small_cache_size": 128, 00:16:36.933 "large_cache_size": 16, 00:16:36.933 "task_count": 2048, 00:16:36.933 "sequence_count": 2048, 00:16:36.933 "buf_count": 2048 00:16:36.933 } 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "bdev", 00:16:36.933 "config": [ 00:16:36.933 { 00:16:36.933 "method": "bdev_set_options", 00:16:36.933 "params": { 00:16:36.933 "bdev_io_pool_size": 65535, 00:16:36.933 "bdev_io_cache_size": 256, 00:16:36.933 "bdev_auto_examine": true, 00:16:36.933 "iobuf_small_cache_size": 128, 00:16:36.933 "iobuf_large_cache_size": 16 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "bdev_raid_set_options", 00:16:36.933 "params": { 00:16:36.933 "process_window_size_kb": 1024, 00:16:36.933 "process_max_bandwidth_mb_sec": 0 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "bdev_iscsi_set_options", 00:16:36.933 "params": { 00:16:36.933 "timeout_sec": 30 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "bdev_nvme_set_options", 00:16:36.933 "params": { 00:16:36.933 "action_on_timeout": "none", 00:16:36.933 "timeout_us": 0, 00:16:36.933 "timeout_admin_us": 0, 00:16:36.933 "keep_alive_timeout_ms": 10000, 00:16:36.933 "arbitration_burst": 0, 00:16:36.933 "low_priority_weight": 0, 00:16:36.933 "medium_priority_weight": 0, 00:16:36.933 "high_priority_weight": 0, 00:16:36.933 "nvme_adminq_poll_period_us": 10000, 00:16:36.933 "nvme_ioq_poll_period_us": 0, 00:16:36.933 "io_queue_requests": 0, 00:16:36.933 "delay_cmd_submit": true, 00:16:36.933 "transport_retry_count": 4, 00:16:36.933 
"bdev_retry_count": 3, 00:16:36.933 "transport_ack_timeout": 0, 00:16:36.933 "ctrlr_loss_timeout_sec": 0, 00:16:36.933 "reconnect_delay_sec": 0, 00:16:36.933 "fast_io_fail_timeout_sec": 0, 00:16:36.933 "disable_auto_failback": false, 00:16:36.933 "generate_uuids": false, 00:16:36.933 "transport_tos": 0, 00:16:36.933 "nvme_error_stat": false, 00:16:36.933 "rdma_srq_size": 0, 00:16:36.933 "io_path_stat": false, 00:16:36.933 "allow_accel_sequence": false, 00:16:36.933 "rdma_max_cq_size": 0, 00:16:36.933 "rdma_cm_event_timeout_ms": 0, 00:16:36.933 "dhchap_digests": [ 00:16:36.933 "sha256", 00:16:36.933 "sha384", 00:16:36.933 "sha512" 00:16:36.933 ], 00:16:36.933 "dhchap_dhgroups": [ 00:16:36.933 "null", 00:16:36.933 "ffdhe2048", 00:16:36.933 "ffdhe3072", 00:16:36.933 "ffdhe4096", 00:16:36.933 "ffdhe6144", 00:16:36.933 "ffdhe8192" 00:16:36.933 ] 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "bdev_nvme_set_hotplug", 00:16:36.933 "params": { 00:16:36.933 "period_us": 100000, 00:16:36.933 "enable": false 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "bdev_malloc_create", 00:16:36.933 "params": { 00:16:36.933 "name": "malloc0", 00:16:36.933 "num_blocks": 8192, 00:16:36.933 "block_size": 4096, 00:16:36.933 "physical_block_size": 4096, 00:16:36.933 "uuid": "2e179489-43fb-417a-b69a-55827990eb88", 00:16:36.933 "optimal_io_boundary": 0, 00:16:36.933 "md_size": 0, 00:16:36.933 "dif_type": 0, 00:16:36.933 "dif_is_head_of_md": false, 00:16:36.933 "dif_pi_format": 0 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "bdev_wait_for_examine" 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "scsi", 00:16:36.933 "config": null 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "scheduler", 00:16:36.933 "config": [ 00:16:36.933 { 00:16:36.933 "method": "framework_set_scheduler", 00:16:36.933 "params": { 00:16:36.933 "name": "static" 00:16:36.933 } 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "vhost_scsi", 00:16:36.933 "config": [] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "vhost_blk", 00:16:36.933 "config": [] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "ublk", 00:16:36.933 "config": [ 00:16:36.933 { 00:16:36.933 "method": "ublk_create_target", 00:16:36.933 "params": { 00:16:36.933 "cpumask": "1" 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "ublk_start_disk", 00:16:36.933 "params": { 00:16:36.933 "bdev_name": "malloc0", 00:16:36.933 "ublk_id": 0, 00:16:36.933 "num_queues": 1, 00:16:36.933 "queue_depth": 128 00:16:36.933 } 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "nbd", 00:16:36.933 "config": [] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "nvmf", 00:16:36.933 "config": [ 00:16:36.933 { 00:16:36.933 "method": "nvmf_set_config", 00:16:36.933 "params": { 00:16:36.933 "discovery_filter": "match_any", 00:16:36.933 "admin_cmd_passthru": { 00:16:36.933 "identify_ctrlr": false 00:16:36.933 }, 00:16:36.933 "dhchap_digests": [ 00:16:36.933 "sha256", 00:16:36.933 "sha384", 00:16:36.933 "sha512" 00:16:36.933 ], 00:16:36.933 "dhchap_dhgroups": [ 00:16:36.933 "null", 00:16:36.933 "ffdhe2048", 00:16:36.933 "ffdhe3072", 00:16:36.933 "ffdhe4096", 00:16:36.933 "ffdhe6144", 00:16:36.933 "ffdhe8192" 00:16:36.933 ] 00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "nvmf_set_max_subsystems", 00:16:36.933 "params": { 00:16:36.933 "max_subsystems": 1024 
00:16:36.933 } 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "method": "nvmf_set_crdt", 00:16:36.933 "params": { 00:16:36.933 "crdt1": 0, 00:16:36.933 "crdt2": 0, 00:16:36.933 "crdt3": 0 00:16:36.933 } 00:16:36.933 } 00:16:36.933 ] 00:16:36.933 }, 00:16:36.933 { 00:16:36.933 "subsystem": "iscsi", 00:16:36.933 "config": [ 00:16:36.933 { 00:16:36.933 "method": "iscsi_set_options", 00:16:36.933 "params": { 00:16:36.933 "node_base": "iqn.2016-06.io.spdk", 00:16:36.933 "max_sessions": 128, 00:16:36.933 "max_connections_per_session": 2, 00:16:36.933 "max_queue_depth": 64, 00:16:36.933 "default_time2wait": 2, 00:16:36.933 "default_time2retain": 20, 00:16:36.933 "first_burst_length": 8192, 00:16:36.933 "immediate_data": true, 00:16:36.933 "allow_duplicated_isid": false, 00:16:36.933 "error_recovery_level": 0, 00:16:36.933 "nop_timeout": 60, 00:16:36.933 "nop_in_interval": 30, 00:16:36.933 "disable_chap": false, 00:16:36.933 "require_chap": false, 00:16:36.933 "mutual_chap": false, 00:16:36.933 "chap_group": 0, 00:16:36.934 "max_large_datain_per_connection": 64, 00:16:36.934 "max_r2t_per_connection": 4, 00:16:36.934 "pdu_pool_size": 36864, 00:16:36.934 "immediate_data_pool_size": 16384, 00:16:36.934 "data_out_pool_size": 2048 00:16:36.934 } 00:16:36.934 } 00:16:36.934 ] 00:16:36.934 } 00:16:36.934 ] 00:16:36.934 }' 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 85073 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85073 ']' 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85073 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85073 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.934 killing process with pid 85073 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85073' 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85073 00:16:36.934 00:39:13 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85073 00:16:37.194 [2024-11-27 00:39:13.953037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.454 [2024-11-27 00:39:13.998907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.454 [2024-11-27 00:39:13.999063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.454 [2024-11-27 00:39:14.007898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.454 [2024-11-27 00:39:14.007970] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:37.454 [2024-11-27 00:39:14.007979] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:37.454 [2024-11-27 00:39:14.008008] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:37.454 [2024-11-27 00:39:14.008162] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=85111 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 85111 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85111 ']' 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:37.715 00:39:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:37.715 "subsystems": [ 00:16:37.715 { 00:16:37.715 "subsystem": "fsdev", 00:16:37.715 "config": [ 00:16:37.715 { 00:16:37.715 "method": "fsdev_set_opts", 00:16:37.715 "params": { 00:16:37.715 "fsdev_io_pool_size": 65535, 00:16:37.715 "fsdev_io_cache_size": 256 00:16:37.715 } 00:16:37.715 } 00:16:37.715 ] 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "subsystem": "keyring", 00:16:37.715 "config": [] 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "subsystem": "iobuf", 00:16:37.715 "config": [ 00:16:37.715 { 00:16:37.715 "method": "iobuf_set_options", 00:16:37.715 "params": { 00:16:37.715 "small_pool_count": 8192, 00:16:37.715 "large_pool_count": 1024, 00:16:37.715 "small_bufsize": 8192, 00:16:37.715 "large_bufsize": 135168, 00:16:37.715 "enable_numa": false 00:16:37.715 } 00:16:37.715 } 00:16:37.715 ] 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "subsystem": "sock", 00:16:37.715 "config": [ 00:16:37.715 { 00:16:37.715 "method": "sock_set_default_impl", 00:16:37.715 "params": { 00:16:37.715 "impl_name": "posix" 00:16:37.715 } 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "method": "sock_impl_set_options", 00:16:37.715 "params": { 00:16:37.715 "impl_name": "ssl", 00:16:37.715 "recv_buf_size": 4096, 00:16:37.715 "send_buf_size": 4096, 00:16:37.715 "enable_recv_pipe": true, 00:16:37.715 "enable_quickack": false, 00:16:37.715 "enable_placement_id": 0, 00:16:37.715 "enable_zerocopy_send_server": true, 00:16:37.715 "enable_zerocopy_send_client": false, 00:16:37.715 "zerocopy_threshold": 0, 00:16:37.715 "tls_version": 0, 00:16:37.715 "enable_ktls": false 00:16:37.715 } 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "method": "sock_impl_set_options", 00:16:37.715 "params": { 00:16:37.715 "impl_name": "posix", 00:16:37.715 "recv_buf_size": 2097152, 00:16:37.715 "send_buf_size": 2097152, 00:16:37.715 "enable_recv_pipe": true, 00:16:37.715 "enable_quickack": false, 00:16:37.715 "enable_placement_id": 0, 00:16:37.715 "enable_zerocopy_send_server": true, 00:16:37.715 "enable_zerocopy_send_client": false, 00:16:37.715 "zerocopy_threshold": 0, 00:16:37.715 "tls_version": 0, 00:16:37.715 "enable_ktls": false 00:16:37.715 } 00:16:37.715 } 00:16:37.715 ] 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "subsystem": "vmd", 00:16:37.715 "config": [] 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "subsystem": "accel", 00:16:37.715 "config": [ 00:16:37.715 { 00:16:37.715 "method": "accel_set_options", 00:16:37.715 "params": { 00:16:37.715 "small_cache_size": 128, 
00:16:37.715 "large_cache_size": 16, 00:16:37.715 "task_count": 2048, 00:16:37.715 "sequence_count": 2048, 00:16:37.715 "buf_count": 2048 00:16:37.715 } 00:16:37.715 } 00:16:37.715 ] 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "subsystem": "bdev", 00:16:37.715 "config": [ 00:16:37.715 { 00:16:37.715 "method": "bdev_set_options", 00:16:37.715 "params": { 00:16:37.715 "bdev_io_pool_size": 65535, 00:16:37.715 "bdev_io_cache_size": 256, 00:16:37.715 "bdev_auto_examine": true, 00:16:37.715 "iobuf_small_cache_size": 128, 00:16:37.715 "iobuf_large_cache_size": 16 00:16:37.715 } 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "method": "bdev_raid_set_options", 00:16:37.715 "params": { 00:16:37.715 "process_window_size_kb": 1024, 00:16:37.715 "process_max_bandwidth_mb_sec": 0 00:16:37.715 } 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "method": "bdev_iscsi_set_options", 00:16:37.715 "params": { 00:16:37.715 "timeout_sec": 30 00:16:37.715 } 00:16:37.715 }, 00:16:37.715 { 00:16:37.715 "method": "bdev_nvme_set_options", 00:16:37.715 "params": { 00:16:37.716 "action_on_timeout": "none", 00:16:37.716 "timeout_us": 0, 00:16:37.716 "timeout_admin_us": 0, 00:16:37.716 "keep_alive_timeout_ms": 10000, 00:16:37.716 "arbitration_burst": 0, 00:16:37.716 "low_priority_weight": 0, 00:16:37.716 "medium_priority_weight": 0, 00:16:37.716 "high_priority_weight": 0, 00:16:37.716 "nvme_adminq_poll_period_us": 10000, 00:16:37.716 "nvme_ioq_poll_period_us": 0, 00:16:37.716 "io_queue_requests": 0, 00:16:37.716 "delay_cmd_submit": true, 00:16:37.716 "transport_retry_count": 4, 00:16:37.716 "bdev_retry_count": 3, 00:16:37.716 "transport_ack_timeout": 0, 00:16:37.716 "ctrlr_loss_timeout_sec": 0, 00:16:37.716 "reconnect_delay_sec": 0, 00:16:37.716 "fast_io_fail_timeout_sec": 0, 00:16:37.716 "disable_auto_failback": false, 00:16:37.716 "generate_uuids": false, 00:16:37.716 "transport_tos": 0, 00:16:37.716 "nvme_error_stat": false, 00:16:37.716 "rdma_srq_size": 0, 00:16:37.716 "io_path_stat": false, 00:16:37.716 "allow_accel_sequence": false, 00:16:37.716 "rdma_max_cq_size": 0, 00:16:37.716 "rdma_cm_event_timeout_ms": 0, 00:16:37.716 "dhchap_digests": [ 00:16:37.716 "sha256", 00:16:37.716 "sha384", 00:16:37.716 "sha512" 00:16:37.716 ], 00:16:37.716 "dhchap_dhgroups": [ 00:16:37.716 "null", 00:16:37.716 "ffdhe2048", 00:16:37.716 "ffdhe3072", 00:16:37.716 "ffdhe4096", 00:16:37.716 "ffdhe6144", 00:16:37.716 "ffdhe8192" 00:16:37.716 ] 00:16:37.716 } 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "method": "bdev_nvme_set_hotplug", 00:16:37.716 "params": { 00:16:37.716 "period_us": 100000, 00:16:37.716 "enable": false 00:16:37.716 } 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "method": "bdev_malloc_create", 00:16:37.716 "params": { 00:16:37.716 "name": "malloc0", 00:16:37.716 "num_blocks": 8192, 00:16:37.716 "block_size": 4096, 00:16:37.716 "physical_block_size": 4096, 00:16:37.716 "uuid": "2e179489-43fb-417a-b69a-55827990eb88", 00:16:37.716 "optimal_io_boundary": 0, 00:16:37.716 "md_size": 0, 00:16:37.716 "dif_type": 0, 00:16:37.716 "dif_is_head_of_md": false, 00:16:37.716 "dif_pi_format": 0 00:16:37.716 } 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "method": "bdev_wait_for_examine" 00:16:37.716 } 00:16:37.716 ] 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "scsi", 00:16:37.716 "config": null 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "scheduler", 00:16:37.716 "config": [ 00:16:37.716 { 00:16:37.716 "method": "framework_set_scheduler", 00:16:37.716 "params": { 00:16:37.716 "name": "static" 00:16:37.716 } 
00:16:37.716 } 00:16:37.716 ] 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "vhost_scsi", 00:16:37.716 "config": [] 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "vhost_blk", 00:16:37.716 "config": [] 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "ublk", 00:16:37.716 "config": [ 00:16:37.716 { 00:16:37.716 "method": "ublk_create_target", 00:16:37.716 "params": { 00:16:37.716 "cpumask": "1" 00:16:37.716 } 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "method": "ublk_start_disk", 00:16:37.716 "params": { 00:16:37.716 "bdev_name": "malloc0", 00:16:37.716 "ublk_id": 0, 00:16:37.716 "num_queues": 1, 00:16:37.716 "queue_depth": 128 00:16:37.716 } 00:16:37.716 } 00:16:37.716 ] 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "nbd", 00:16:37.716 "config": [] 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "nvmf", 00:16:37.716 "config": [ 00:16:37.716 { 00:16:37.716 "method": "nvmf_set_config", 00:16:37.716 "params": { 00:16:37.716 "discovery_filter": "match_any", 00:16:37.716 "admin_cmd_passthru": { 00:16:37.716 "identify_ctrlr": false 00:16:37.716 }, 00:16:37.716 "dhchap_digests": [ 00:16:37.716 "sha256", 00:16:37.716 "sha384", 00:16:37.716 "sha512" 00:16:37.716 ], 00:16:37.716 "dhchap_dhgroups": [ 00:16:37.716 "null", 00:16:37.716 "ffdhe2048", 00:16:37.716 "ffdhe3072", 00:16:37.716 "ffdhe4096", 00:16:37.716 "ffdhe6144", 00:16:37.716 "ffdhe8192" 00:16:37.716 ] 00:16:37.716 } 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "method": "nvmf_set_max_subsystems", 00:16:37.716 "params": { 00:16:37.716 "max_subsystems": 1024 00:16:37.716 } 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "method": "nvmf_set_crdt", 00:16:37.716 "params": { 00:16:37.716 "crdt1": 0, 00:16:37.716 "crdt2": 0, 00:16:37.716 "crdt3": 0 00:16:37.716 } 00:16:37.716 } 00:16:37.716 ] 00:16:37.716 }, 00:16:37.716 { 00:16:37.716 "subsystem": "iscsi", 00:16:37.716 "config": [ 00:16:37.716 { 00:16:37.716 "method": "iscsi_set_options", 00:16:37.716 "params": { 00:16:37.716 "node_base": "iqn.2016-06.io.spdk", 00:16:37.716 "max_sessions": 128, 00:16:37.716 "max_connections_per_session": 2, 00:16:37.716 "max_queue_depth": 64, 00:16:37.716 "default_time2wait": 2, 00:16:37.716 "default_time2retain": 20, 00:16:37.716 "first_burst_length": 8192, 00:16:37.716 "immediate_data": true, 00:16:37.716 "allow_duplicated_isid": false, 00:16:37.716 "error_recovery_level": 0, 00:16:37.716 "nop_timeout": 60, 00:16:37.716 "nop_in_interval": 30, 00:16:37.716 "disable_chap": false, 00:16:37.716 "require_chap": false, 00:16:37.716 "mutual_chap": false, 00:16:37.716 "chap_group": 0, 00:16:37.716 "max_large_datain_per_connection": 64, 00:16:37.716 "max_r2t_per_connection": 4, 00:16:37.716 "pdu_pool_size": 36864, 00:16:37.716 "immediate_data_pool_size": 16384, 00:16:37.716 "data_out_pool_size": 2048 00:16:37.716 } 00:16:37.716 } 00:16:37.716 ] 00:16:37.716 } 00:16:37.716 ] 00:16:37.716 }' 00:16:37.977 [2024-11-27 00:39:14.539749] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
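The -c /dev/fd/63 argument on the spdk_tgt command line above is how bash process substitution appears in the log: the JSON blob captured by save_config is echoed straight back into the new target rather than being written to a file first. In plain shell terms:

    config=$(scripts/rpc.py save_config)             # capture phase (pid 85073)
    build/bin/spdk_tgt -L ublk -c <(echo "$config")  # replay phase (pid 85111)

Once the replayed target is up, the test only needs ublk_get_disks to report /dev/ublkb0 again, which the jq check a few lines below confirms.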
00:16:37.977 [2024-11-27 00:39:14.539910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85111 ] 00:16:37.977 [2024-11-27 00:39:14.703812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.977 [2024-11-27 00:39:14.732176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.549 [2024-11-27 00:39:15.129875] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:38.549 [2024-11-27 00:39:15.130270] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:38.549 [2024-11-27 00:39:15.138010] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:38.549 [2024-11-27 00:39:15.138104] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:38.549 [2024-11-27 00:39:15.138113] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:38.549 [2024-11-27 00:39:15.138123] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.549 [2024-11-27 00:39:15.146978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.549 [2024-11-27 00:39:15.147017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.549 [2024-11-27 00:39:15.153891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.549 [2024-11-27 00:39:15.153995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:38.549 [2024-11-27 00:39:15.170875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 85111 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85111 ']' 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85111 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85111 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.809 killing process with pid 85111 00:16:38.809 
00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.809 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85111' 00:16:38.810 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85111 00:16:38.810 00:39:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85111 00:16:39.070 [2024-11-27 00:39:15.748285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.070 [2024-11-27 00:39:15.784989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.070 [2024-11-27 00:39:15.785132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.070 [2024-11-27 00:39:15.793901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.070 [2024-11-27 00:39:15.793972] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.070 [2024-11-27 00:39:15.793982] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.070 [2024-11-27 00:39:15.794020] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.070 [2024-11-27 00:39:15.794185] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:39.641 00:39:16 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:39.641 00:16:39.641 real 0m3.913s 00:16:39.641 user 0m2.642s 00:16:39.641 sys 0m1.945s 00:16:39.641 00:39:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.641 ************************************ 00:16:39.641 END TEST test_save_ublk_config 00:16:39.641 ************************************ 00:16:39.641 00:39:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:39.641 00:39:16 ublk -- ublk/ublk.sh@139 -- # spdk_pid=85163 00:16:39.641 00:39:16 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:39.641 00:39:16 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:39.641 00:39:16 ublk -- ublk/ublk.sh@141 -- # waitforlisten 85163 00:16:39.641 00:39:16 ublk -- common/autotest_common.sh@835 -- # '[' -z 85163 ']' 00:16:39.641 00:39:16 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.641 00:39:16 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.641 00:39:16 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.641 00:39:16 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.641 00:39:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.641 [2024-11-27 00:39:16.391559] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
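With the save/restore case done, ublk.sh starts its long-lived target for the remaining tests: spdk_tgt -m 0x3 -L ublk with a cleanup trap, as the harness lines above show. The knobs set at the top of ublk.sh (NUM_DEVS=4, NUM_QUEUE=4, QUEUE_DEPTH=512, MALLOC_SIZE_MB=128) mean this suite exercises up to four ublk devices, each with four queues of depth 512 over a 128 MiB malloc bdev; the first case, test_create_ublk, follows immediately.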
00:16:39.641 [2024-11-27 00:39:16.391706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85163 ] 00:16:39.902 [2024-11-27 00:39:16.549547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:39.902 [2024-11-27 00:39:16.581633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.902 [2024-11-27 00:39:16.581700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.847 00:39:17 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.847 00:39:17 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:40.847 00:39:17 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:40.847 00:39:17 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:40.848 00:39:17 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.848 00:39:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 ************************************ 00:16:40.848 START TEST test_create_ublk 00:16:40.848 ************************************ 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 [2024-11-27 00:39:17.318880] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:40.848 [2024-11-27 00:39:17.320808] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 [2024-11-27 00:39:17.421055] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:40.848 [2024-11-27 00:39:17.421546] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:40.848 [2024-11-27 00:39:17.421565] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:40.848 [2024-11-27 00:39:17.421575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:40.848 [2024-11-27 00:39:17.430206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:40.848 [2024-11-27 00:39:17.430254] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:40.848 
[2024-11-27 00:39:17.436990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:40.848 [2024-11-27 00:39:17.438581] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:40.848 [2024-11-27 00:39:17.464913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.848 00:39:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:40.848 { 00:16:40.848 "ublk_device": "/dev/ublkb0", 00:16:40.848 "id": 0, 00:16:40.848 "queue_depth": 512, 00:16:40.848 "num_queues": 4, 00:16:40.848 "bdev_name": "Malloc0" 00:16:40.848 } 00:16:40.848 ]' 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:40.848 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:41.109 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:41.109 00:39:17 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:41.109 00:39:17 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
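Condensed from the xtrace above, the create-and-exercise flow of this test is roughly the sketch below. The rpc.py invocations mirror the rpc_cmd calls in the log; running them directly like this is an assumption, since the harness wraps them with retries and a fixed RPC socket.

  rpc.py ublk_create_target
  rpc.py bdev_malloc_create 128 4096             # Malloc0: 128 MiB bdev, 4096-byte blocks
  rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0, 4 queues, queue depth 512
  rpc.py ublk_get_disks -n 0                     # returns the JSON device record verified above
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0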
00:16:41.110 00:39:17 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:41.110 fio: verification read phase will never start because write phase uses all of runtime 00:16:41.110 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:41.110 fio-3.35 00:16:41.110 Starting 1 process 00:16:51.154 00:16:51.154 fio_test: (groupid=0, jobs=1): err= 0: pid=85207: Wed Nov 27 00:39:27 2024 00:16:51.154 write: IOPS=13.0k, BW=50.9MiB/s (53.4MB/s)(509MiB/10001msec); 0 zone resets 00:16:51.154 clat (usec): min=30, max=7261, avg=75.93, stdev=128.75 00:16:51.154 lat (usec): min=31, max=7261, avg=76.41, stdev=128.87 00:16:51.154 clat percentiles (usec): 00:16:51.154 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:16:51.154 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 67], 00:16:51.154 | 70.00th=[ 68], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 123], 00:16:51.154 | 99.00th=[ 243], 99.50th=[ 277], 99.90th=[ 2835], 99.95th=[ 3621], 00:16:51.154 | 99.99th=[ 4015] 00:16:51.154 bw ( KiB/s): min= 8616, max=60048, per=99.43%, avg=51808.84, stdev=15139.57, samples=19 00:16:51.154 iops : min= 2154, max=15012, avg=12952.21, stdev=3784.89, samples=19 00:16:51.154 lat (usec) : 50=0.09%, 100=91.99%, 250=7.03%, 500=0.68%, 750=0.01% 00:16:51.154 lat (usec) : 1000=0.01% 00:16:51.154 lat (msec) : 2=0.05%, 4=0.12%, 10=0.01% 00:16:51.154 cpu : usr=2.16%, sys=10.63%, ctx=130281, majf=0, minf=798 00:16:51.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:51.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.155 issued rwts: total=0,130277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:51.155 00:16:51.155 Run status group 0 (all jobs): 00:16:51.155 WRITE: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=509MiB (534MB), run=10001-10001msec 00:16:51.155 00:16:51.155 Disk stats (read/write): 00:16:51.155 ublkb0: ios=0/128754, merge=0/0, ticks=0/8536, in_queue=8537, util=99.09% 00:16:51.155 00:39:27 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:51.155 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.155 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.155 [2024-11-27 00:39:27.882589] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:51.155 [2024-11-27 00:39:27.927909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:51.155 [2024-11-27 00:39:27.928616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:51.155 [2024-11-27 00:39:27.935891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:51.155 [2024-11-27 00:39:27.936166] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:51.155 [2024-11-27 00:39:27.936188] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.414 00:39:27 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:51.414 [2024-11-27 00:39:27.951958] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:16:51.414 request:
00:16:51.414 {
00:16:51.414 "ublk_id": 0,
00:16:51.414 "method": "ublk_stop_disk",
00:16:51.414 "req_id": 1
00:16:51.414 }
00:16:51.414 Got JSON-RPC error response
00:16:51.414 response:
00:16:51.414 {
00:16:51.414 "code": -19,
00:16:51.414 "message": "No such device"
00:16:51.414 }
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:51.414 00:39:27 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:51.414 [2024-11-27 00:39:27.967933] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:51.414 [2024-11-27 00:39:27.969231] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:16:51.414 [2024-11-27 00:39:27.969260] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.414 00:39:27 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.414 00:39:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.414 00:39:28 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:16:51.414 00:39:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.414 00:39:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:51.414 00:39:28 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:16:51.414 00:39:28
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:51.414 00:39:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.414 00:39:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:51.414 00:39:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:51.414 00:39:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:51.414 00:16:51.414 real 0m10.813s 00:16:51.414 user 0m0.519s 00:16:51.414 sys 0m1.148s 00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.414 00:39:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.414 ************************************ 00:16:51.414 END TEST test_create_ublk 00:16:51.414 ************************************ 00:16:51.414 00:39:28 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:51.414 00:39:28 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.414 00:39:28 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.414 00:39:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.414 ************************************ 00:16:51.414 START TEST test_create_multi_ublk 00:16:51.414 ************************************ 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.414 [2024-11-27 00:39:28.171870] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.414 [2024-11-27 00:39:28.172753] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.414 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.673 [2024-11-27 00:39:28.250994] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:51.673 [2024-11-27 00:39:28.251312] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:51.673 [2024-11-27 00:39:28.251327] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:51.673 [2024-11-27 00:39:28.251332] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.673 [2024-11-27 00:39:28.263912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.673 [2024-11-27 00:39:28.263930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.673 [2024-11-27 00:39:28.275901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.673 [2024-11-27 00:39:28.276416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:51.673 [2024-11-27 00:39:28.302886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.673 [2024-11-27 00:39:28.386972] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:51.673 [2024-11-27 00:39:28.387282] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:51.673 [2024-11-27 00:39:28.387295] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:51.673 [2024-11-27 00:39:28.387301] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.673 [2024-11-27 00:39:28.398894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.673 [2024-11-27 00:39:28.398915] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.673 [2024-11-27 00:39:28.409903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.673 [2024-11-27 00:39:28.410415] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:51.673 [2024-11-27 00:39:28.434882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.673 
00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.673 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.932 [2024-11-27 00:39:28.517973] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:51.932 [2024-11-27 00:39:28.518280] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:51.932 [2024-11-27 00:39:28.518294] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:51.932 [2024-11-27 00:39:28.518299] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.932 [2024-11-27 00:39:28.531051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.932 [2024-11-27 00:39:28.531068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.932 [2024-11-27 00:39:28.541881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.932 [2024-11-27 00:39:28.542373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:51.932 [2024-11-27 00:39:28.566879] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.932 [2024-11-27 00:39:28.649964] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:51.932 [2024-11-27 00:39:28.650275] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:51.932 [2024-11-27 00:39:28.650287] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:51.932 [2024-11-27 00:39:28.650294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.932 
[2024-11-27 00:39:28.663049] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.932 [2024-11-27 00:39:28.663070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.932 [2024-11-27 00:39:28.673878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.932 [2024-11-27 00:39:28.674365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:51.932 [2024-11-27 00:39:28.698879] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.932 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:52.191 { 00:16:52.191 "ublk_device": "/dev/ublkb0", 00:16:52.191 "id": 0, 00:16:52.191 "queue_depth": 512, 00:16:52.191 "num_queues": 4, 00:16:52.191 "bdev_name": "Malloc0" 00:16:52.191 }, 00:16:52.191 { 00:16:52.191 "ublk_device": "/dev/ublkb1", 00:16:52.191 "id": 1, 00:16:52.191 "queue_depth": 512, 00:16:52.191 "num_queues": 4, 00:16:52.191 "bdev_name": "Malloc1" 00:16:52.191 }, 00:16:52.191 { 00:16:52.191 "ublk_device": "/dev/ublkb2", 00:16:52.191 "id": 2, 00:16:52.191 "queue_depth": 512, 00:16:52.191 "num_queues": 4, 00:16:52.191 "bdev_name": "Malloc2" 00:16:52.191 }, 00:16:52.191 { 00:16:52.191 "ublk_device": "/dev/ublkb3", 00:16:52.191 "id": 3, 00:16:52.191 "queue_depth": 512, 00:16:52.191 "num_queues": 4, 00:16:52.191 "bdev_name": "Malloc3" 00:16:52.191 } 00:16:52.191 ]' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:52.191 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:52.450 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.450 00:39:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.450 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:52.708 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.709 [2024-11-27 00:39:29.373945] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.709 [2024-11-27 00:39:29.413282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.709 [2024-11-27 00:39:29.414551] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.709 [2024-11-27 00:39:29.420896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.709 [2024-11-27 00:39:29.421140] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:52.709 [2024-11-27 00:39:29.421151] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.709 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.709 [2024-11-27 00:39:29.436942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.709 [2024-11-27 00:39:29.476916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.709 [2024-11-27 00:39:29.477751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.709 [2024-11-27 00:39:29.484893] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.709 [2024-11-27 00:39:29.485140] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:52.709 [2024-11-27 00:39:29.485151] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.967 [2024-11-27 00:39:29.500943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.967 [2024-11-27 00:39:29.542394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.967 [2024-11-27 00:39:29.543423] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.967 [2024-11-27 00:39:29.548881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.967 [2024-11-27 00:39:29.549108] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:52.967 [2024-11-27 00:39:29.549122] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.967 [2024-11-27 00:39:29.564934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.967 [2024-11-27 00:39:29.603902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.967 [2024-11-27 00:39:29.604590] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.967 [2024-11-27 00:39:29.612878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.967 [2024-11-27 00:39:29.613118] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:52.967 [2024-11-27 00:39:29.613130] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.967 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:53.226 [2024-11-27 00:39:29.796945] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:53.226 [2024-11-27 00:39:29.798196] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:53.226 [2024-11-27 00:39:29.798227] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.226 00:39:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:53.484 00:39:30 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:53.484 ************************************ 00:16:53.484 END TEST test_create_multi_ublk 00:16:53.484 ************************************ 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:53.484 00:16:53.484 real 0m1.981s 00:16:53.484 user 0m0.807s 00:16:53.484 sys 0m0.146s 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.484 00:39:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 00:39:30 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:53.484 00:39:30 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:53.484 00:39:30 ublk -- ublk/ublk.sh@130 -- # killprocess 85163 00:16:53.484 00:39:30 ublk -- common/autotest_common.sh@954 -- # '[' -z 85163 ']' 00:16:53.484 00:39:30 ublk -- common/autotest_common.sh@958 -- # kill -0 85163 00:16:53.484 00:39:30 ublk -- common/autotest_common.sh@959 -- # uname 00:16:53.484 00:39:30 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.484 00:39:30 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85163 00:16:53.484 killing process with pid 85163 00:16:53.485 00:39:30 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.485 00:39:30 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.485 00:39:30 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85163' 00:16:53.485 00:39:30 ublk -- common/autotest_common.sh@973 -- # kill 85163 00:16:53.485 00:39:30 ublk -- common/autotest_common.sh@978 -- # wait 85163 00:16:53.743 [2024-11-27 00:39:30.366608] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:53.743 [2024-11-27 00:39:30.366667] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:54.004 00:16:54.004 real 0m18.502s 00:16:54.004 user 0m28.578s 00:16:54.004 sys 0m7.419s 00:16:54.004 ************************************ 00:16:54.004 END TEST ublk 00:16:54.004 ************************************ 00:16:54.004 00:39:30 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.004 00:39:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.004 00:39:30 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:54.004 
00:39:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.004 00:39:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.004 00:39:30 -- common/autotest_common.sh@10 -- # set +x 00:16:54.004 ************************************ 00:16:54.004 START TEST ublk_recovery 00:16:54.004 ************************************ 00:16:54.004 00:39:30 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:54.004 * Looking for test storage... 00:16:54.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:54.004 00:39:30 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:54.004 00:39:30 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:54.004 00:39:30 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:54.265 00:39:30 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:54.265 00:39:30 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.265 00:39:30 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.265 00:39:30 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.265 00:39:30 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.265 00:39:30 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.265 00:39:30 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.266 00:39:30 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:54.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.266 --rc genhtml_branch_coverage=1 00:16:54.266 --rc genhtml_function_coverage=1 00:16:54.266 --rc genhtml_legend=1 00:16:54.266 --rc geninfo_all_blocks=1 00:16:54.266 --rc geninfo_unexecuted_blocks=1 00:16:54.266 00:16:54.266 ' 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:54.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.266 --rc genhtml_branch_coverage=1 00:16:54.266 --rc genhtml_function_coverage=1 00:16:54.266 --rc genhtml_legend=1 00:16:54.266 --rc geninfo_all_blocks=1 00:16:54.266 --rc geninfo_unexecuted_blocks=1 00:16:54.266 00:16:54.266 ' 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:54.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.266 --rc genhtml_branch_coverage=1 00:16:54.266 --rc genhtml_function_coverage=1 00:16:54.266 --rc genhtml_legend=1 00:16:54.266 --rc geninfo_all_blocks=1 00:16:54.266 --rc geninfo_unexecuted_blocks=1 00:16:54.266 00:16:54.266 ' 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:54.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.266 --rc genhtml_branch_coverage=1 00:16:54.266 --rc genhtml_function_coverage=1 00:16:54.266 --rc genhtml_legend=1 00:16:54.266 --rc geninfo_all_blocks=1 00:16:54.266 --rc geninfo_unexecuted_blocks=1 00:16:54.266 00:16:54.266 ' 00:16:54.266 00:39:30 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:54.266 00:39:30 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:54.266 00:39:30 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:54.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.266 00:39:30 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85531 00:16:54.266 00:39:30 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.266 00:39:30 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85531 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85531 ']' 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.266 00:39:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.266 00:39:30 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:54.266 [2024-11-27 00:39:30.928479] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:16:54.266 [2024-11-27 00:39:30.928620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85531 ] 00:16:54.525 [2024-11-27 00:39:31.086289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:54.525 [2024-11-27 00:39:31.104520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.525 [2024-11-27 00:39:31.104556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:55.092 00:39:31 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.092 [2024-11-27 00:39:31.761871] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:55.092 [2024-11-27 00:39:31.762812] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.092 00:39:31 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.092 malloc0 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.092 00:39:31 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:55.092 [2024-11-27 00:39:31.793989] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:55.092 [2024-11-27 00:39:31.794086] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:55.092 [2024-11-27 00:39:31.794094] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:55.092 [2024-11-27 00:39:31.794101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:55.092 [2024-11-27 00:39:31.802952] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:55.092 [2024-11-27 00:39:31.802974] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:55.092 [2024-11-27 00:39:31.809882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:55.092 [2024-11-27 00:39:31.809998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:55.092 [2024-11-27 00:39:31.824895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:55.092 1 00:16:55.092 00:39:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.092 00:39:31 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:56.468 00:39:32 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85563 00:16:56.468 00:39:32 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:56.468 00:39:32 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:56.468 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.468 fio-3.35 00:16:56.468 Starting 1 process 00:17:01.739 00:39:37 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85531 00:17:01.739 00:39:37 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:07.022 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85531 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:07.022 00:39:42 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85676 00:17:07.022 00:39:42 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:07.022 00:39:42 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:07.022 00:39:42 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85676 00:17:07.022 00:39:42 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85676 ']' 00:17:07.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.022 00:39:42 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.022 00:39:42 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.022 00:39:42 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.022 00:39:42 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.022 00:39:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.022 [2024-11-27 00:39:42.922751] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:17:07.022 [2024-11-27 00:39:42.922881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85676 ] 00:17:07.022 [2024-11-27 00:39:43.076065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:07.022 [2024-11-27 00:39:43.094109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.022 [2024-11-27 00:39:43.094200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:07.022 00:39:43 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.022 [2024-11-27 00:39:43.721870] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:07.022 [2024-11-27 00:39:43.722804] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.022 00:39:43 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.022 malloc0 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.022 00:39:43 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.022 [2024-11-27 00:39:43.754000] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:07.022 [2024-11-27 00:39:43.754029] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:07.022 [2024-11-27 00:39:43.754036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:07.022 [2024-11-27 00:39:43.761908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:07.022 [2024-11-27 00:39:43.761925] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:07.022 1 00:17:07.022 00:39:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.022 00:39:43 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85563 00:17:08.395 [2024-11-27 00:39:44.761956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:08.395 [2024-11-27 00:39:44.769880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:08.395 [2024-11-27 00:39:44.769901] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:09.329 [2024-11-27 00:39:45.769919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:09.329 [2024-11-27 00:39:45.773888] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:09.329 [2024-11-27 00:39:45.773900] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:10.263 [2024-11-27 00:39:46.773924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:10.263 [2024-11-27 00:39:46.781880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:10.263 [2024-11-27 00:39:46.781895] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:10.263 [2024-11-27 00:39:46.781901] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:10.263 [2024-11-27 00:39:46.781973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:32.189 [2024-11-27 00:40:08.167875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:32.189 [2024-11-27 00:40:08.174511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:32.189 [2024-11-27 00:40:08.182052] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:32.189 [2024-11-27 00:40:08.182143] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:58.730 00:17:58.730 fio_test: (groupid=0, jobs=1): err= 0: pid=85567: Wed Nov 27 00:40:33 2024 00:17:58.730 read: IOPS=14.9k, BW=58.4MiB/s (61.2MB/s)(3502MiB/60002msec) 00:17:58.730 slat (nsec): min=959, max=293641, avg=4815.37, stdev=1459.90 00:17:58.730 clat (usec): min=586, max=30348k, avg=4106.15, stdev=248237.18 00:17:58.730 lat (usec): min=590, max=30348k, avg=4110.97, stdev=248237.18 00:17:58.730 clat percentiles (usec): 00:17:58.730 | 1.00th=[ 1745], 5.00th=[ 1844], 10.00th=[ 1860], 20.00th=[ 1893], 00:17:58.730 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1926], 60.00th=[ 1942], 00:17:58.730 | 70.00th=[ 1958], 80.00th=[ 1975], 90.00th=[ 2024], 95.00th=[ 3130], 00:17:58.730 | 99.00th=[ 5342], 99.50th=[ 5735], 99.90th=[ 8356], 99.95th=[12649], 00:17:58.730 | 99.99th=[13566] 00:17:58.730 bw ( KiB/s): min=44592, max=126904, per=100.00%, avg=119563.25, stdev=16719.15, samples=59 00:17:58.730 iops : min=11148, max=31726, avg=29890.81, stdev=4179.79, samples=59 00:17:58.730 write: IOPS=14.9k, BW=58.3MiB/s (61.1MB/s)(3497MiB/60002msec); 0 zone resets 00:17:58.730 slat (nsec): min=937, max=192307, avg=4839.50, stdev=1396.69 00:17:58.730 clat (usec): min=602, max=30348k, avg=4456.54, stdev=264460.47 00:17:58.730 lat (usec): min=606, max=30348k, avg=4461.38, stdev=264460.47 00:17:58.731 clat percentiles (usec): 00:17:58.731 | 1.00th=[ 1778], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1975], 00:17:58.731 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040], 00:17:58.731 | 70.00th=[ 2040], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 3097], 00:17:58.731 | 99.00th=[ 5407], 99.50th=[ 5866], 99.90th=[ 8455], 99.95th=[12649], 00:17:58.731 | 99.99th=[13435] 00:17:58.731 bw ( KiB/s): min=44128, max=126984, per=100.00%, avg=119384.81, stdev=16639.05, samples=59 00:17:58.731 iops : min=11032, max=31746, avg=29846.20, stdev=4159.76, samples=59 00:17:58.731 lat (usec) : 750=0.01%, 1000=0.01% 00:17:58.731 lat (msec) : 2=61.42%, 4=35.48%, 10=3.01%, 20=0.07%, >=2000=0.01% 00:17:58.731 cpu : usr=3.28%, sys=14.60%, ctx=59400, majf=0, minf=15 00:17:58.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:58.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:58.731 
issued rwts: total=896500,895200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:58.731 00:17:58.731 Run status group 0 (all jobs): 00:17:58.731 READ: bw=58.4MiB/s (61.2MB/s), 58.4MiB/s-58.4MiB/s (61.2MB/s-61.2MB/s), io=3502MiB (3672MB), run=60002-60002msec 00:17:58.731 WRITE: bw=58.3MiB/s (61.1MB/s), 58.3MiB/s-58.3MiB/s (61.1MB/s-61.1MB/s), io=3497MiB (3667MB), run=60002-60002msec 00:17:58.731 00:17:58.731 Disk stats (read/write): 00:17:58.731 ublkb1: ios=893083/891823, merge=0/0, ticks=3631251/3867946, in_queue=7499197, util=99.88% 00:17:58.731 00:40:33 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 [2024-11-27 00:40:33.086716] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:58.731 [2024-11-27 00:40:33.128903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:58.731 [2024-11-27 00:40:33.129129] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:58.731 [2024-11-27 00:40:33.136889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:58.731 [2024-11-27 00:40:33.137056] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:58.731 [2024-11-27 00:40:33.137130] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.731 00:40:33 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 [2024-11-27 00:40:33.149945] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:58.731 [2024-11-27 00:40:33.151161] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:58.731 [2024-11-27 00:40:33.151191] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.731 00:40:33 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:58.731 00:40:33 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:58.731 00:40:33 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85676 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85676 ']' 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85676 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85676 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.731 killing process with pid 85676 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85676' 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85676 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85676 
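The sequence above completes the user-recovery round trip: GET_DEV_INFO confirms device state, START/END_USER_RECOVERY re-attach the queues while fio keeps verifying data, and STOP_DEV/DEL_DEV tear the disk down before the target is destroyed. A minimal sketch of the same flow as explicit RPC calls, assuming a backing bdev named malloc0 (the bdev name and the RPC option spellings are assumptions; the ublk id, queue count, and queue depth are taken from the trace):

    # ublk user-recovery flow, condensed from the trace above
    rpc.py ublk_create_target                      # start the ublk target
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # expose bdev as /dev/ublkb1
    # ... SPDK app is killed and restarted here while fio holds /dev/ublkb1 ...
    rpc.py ublk_recover_disk malloc0 1             # drives START/END_USER_RECOVERY
    rpc.py ublk_stop_disk 1                        # drives STOP_DEV and DEL_DEV
    rpc.py ublk_destroy_target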
00:17:58.731 [2024-11-27 00:40:33.346934] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:58.731 [2024-11-27 00:40:33.346981] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:58.731 ************************************ 00:17:58.731 END TEST ublk_recovery 00:17:58.731 ************************************ 00:17:58.731 00:17:58.731 real 1m2.935s 00:17:58.731 user 1m45.958s 00:17:58.731 sys 0m20.112s 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.731 00:40:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 00:40:33 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:58.731 00:40:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:58.731 00:40:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.731 00:40:33 -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 00:40:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:58.731 00:40:33 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:58.731 00:40:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:58.731 00:40:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.731 00:40:33 -- common/autotest_common.sh@10 -- # set +x 00:17:58.731 ************************************ 00:17:58.731 START TEST ftl 00:17:58.731 ************************************ 00:17:58.731 00:40:33 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:58.731 * Looking for test storage... 
00:17:58.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.731 00:40:33 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.731 00:40:33 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.731 00:40:33 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.731 00:40:33 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.731 00:40:33 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.731 00:40:33 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.731 00:40:33 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.731 00:40:33 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.731 00:40:33 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.731 00:40:33 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.731 00:40:33 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.731 00:40:33 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.731 00:40:33 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.731 00:40:33 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.731 00:40:33 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.731 00:40:33 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:58.731 00:40:33 ftl -- scripts/common.sh@345 -- # : 1 00:17:58.732 00:40:33 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.732 00:40:33 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.732 00:40:33 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:58.732 00:40:33 ftl -- scripts/common.sh@353 -- # local d=1 00:17:58.732 00:40:33 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.732 00:40:33 ftl -- scripts/common.sh@355 -- # echo 1 00:17:58.732 00:40:33 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.732 00:40:33 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:58.732 00:40:33 ftl -- scripts/common.sh@353 -- # local d=2 00:17:58.732 00:40:33 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.732 00:40:33 ftl -- scripts/common.sh@355 -- # echo 2 00:17:58.732 00:40:33 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.732 00:40:33 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.732 00:40:33 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.732 00:40:33 ftl -- scripts/common.sh@368 -- # return 0 00:17:58.732 00:40:33 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.732 00:40:33 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.732 --rc genhtml_branch_coverage=1 00:17:58.732 --rc genhtml_function_coverage=1 00:17:58.732 --rc genhtml_legend=1 00:17:58.732 --rc geninfo_all_blocks=1 00:17:58.732 --rc geninfo_unexecuted_blocks=1 00:17:58.732 00:17:58.732 ' 00:17:58.732 00:40:33 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.732 --rc genhtml_branch_coverage=1 00:17:58.732 --rc genhtml_function_coverage=1 00:17:58.732 --rc genhtml_legend=1 00:17:58.732 --rc geninfo_all_blocks=1 00:17:58.732 --rc geninfo_unexecuted_blocks=1 00:17:58.732 00:17:58.732 ' 00:17:58.732 00:40:33 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.732 --rc genhtml_branch_coverage=1 00:17:58.732 --rc genhtml_function_coverage=1 00:17:58.732 --rc 
genhtml_legend=1 00:17:58.732 --rc geninfo_all_blocks=1 00:17:58.732 --rc geninfo_unexecuted_blocks=1 00:17:58.732 00:17:58.732 ' 00:17:58.732 00:40:33 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.732 --rc genhtml_branch_coverage=1 00:17:58.732 --rc genhtml_function_coverage=1 00:17:58.732 --rc genhtml_legend=1 00:17:58.732 --rc geninfo_all_blocks=1 00:17:58.732 --rc geninfo_unexecuted_blocks=1 00:17:58.732 00:17:58.732 ' 00:17:58.732 00:40:33 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:58.732 00:40:33 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:58.732 00:40:33 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.732 00:40:33 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.732 00:40:33 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:58.732 00:40:33 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:58.732 00:40:33 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.732 00:40:33 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:58.732 00:40:33 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:58.732 00:40:33 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.732 00:40:33 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.732 00:40:33 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:58.732 00:40:33 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:58.732 00:40:33 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.732 00:40:33 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:58.732 00:40:33 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:58.732 00:40:33 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:58.732 00:40:33 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.732 00:40:33 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:58.732 00:40:33 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:58.732 00:40:33 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:58.732 00:40:33 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.732 00:40:33 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:58.732 00:40:33 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.732 00:40:33 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:58.732 00:40:33 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:58.732 00:40:33 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:58.732 00:40:33 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.732 00:40:33 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.732 00:40:33 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.732 00:40:33 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:58.732 00:40:33 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:58.732 00:40:33 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:58.732 00:40:33 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:58.732 00:40:33 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:58.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:58.732 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:58.732 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:58.732 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:58.732 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:58.732 00:40:34 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=86472 00:17:58.732 00:40:34 ftl -- ftl/ftl.sh@38 -- # waitforlisten 86472 00:17:58.732 00:40:34 ftl -- common/autotest_common.sh@835 -- # '[' -z 86472 ']' 00:17:58.732 00:40:34 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.732 00:40:34 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.732 00:40:34 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.732 00:40:34 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.732 00:40:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:58.732 00:40:34 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:58.732 [2024-11-27 00:40:34.476563] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:17:58.732 [2024-11-27 00:40:34.476716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86472 ] 00:17:58.732 [2024-11-27 00:40:34.641030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.732 [2024-11-27 00:40:34.670704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.732 00:40:35 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.732 00:40:35 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:58.732 00:40:35 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:58.995 00:40:35 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:59.256 00:40:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:59.256 00:40:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@50 -- # break 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:59.893 00:40:36 ftl -- 
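The target comes up with deferred initialization: spdk_tgt waits for RPC, the bdev layer is configured, subsystems are started, and NVMe controllers are attached from generated JSON before ftl.sh probes for usable disks. Condensed, the startup traced above amounts to the following (paths and flags as in the log; the process-substitution form is inferred from the /dev/fd/62 argument):

    # deferred-init startup used by ftl.sh
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    # once /var/tmp/spdk.sock is listening:
    rpc.py bdev_set_options -d       # bdev options before subsystem init (-d flag as traced)
    rpc.py framework_start_init      # run subsystem initialization
    rpc.py load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)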
ftl/ftl.sh@59 -- # base_size=1310720 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:59.893 00:40:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:00.164 00:40:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:00.164 00:40:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:00.164 00:40:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:00.164 00:40:36 ftl -- ftl/ftl.sh@63 -- # break 00:18:00.164 00:40:36 ftl -- ftl/ftl.sh@66 -- # killprocess 86472 00:18:00.164 00:40:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 86472 ']' 00:18:00.164 00:40:36 ftl -- common/autotest_common.sh@958 -- # kill -0 86472 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@959 -- # uname 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86472 00:18:00.165 killing process with pid 86472 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86472' 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@973 -- # kill 86472 00:18:00.165 00:40:36 ftl -- common/autotest_common.sh@978 -- # wait 86472 00:18:00.425 00:40:37 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:00.425 00:40:37 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:00.425 00:40:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:00.425 00:40:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.425 00:40:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:00.685 ************************************ 00:18:00.685 START TEST ftl_fio_basic 00:18:00.685 ************************************ 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:00.685 * Looking for test storage... 
00:18:00.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.685 --rc genhtml_branch_coverage=1 00:18:00.685 --rc genhtml_function_coverage=1 00:18:00.685 --rc genhtml_legend=1 00:18:00.685 --rc geninfo_all_blocks=1 00:18:00.685 --rc geninfo_unexecuted_blocks=1 00:18:00.685 00:18:00.685 ' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.685 --rc 
genhtml_branch_coverage=1 00:18:00.685 --rc genhtml_function_coverage=1 00:18:00.685 --rc genhtml_legend=1 00:18:00.685 --rc geninfo_all_blocks=1 00:18:00.685 --rc geninfo_unexecuted_blocks=1 00:18:00.685 00:18:00.685 ' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.685 --rc genhtml_branch_coverage=1 00:18:00.685 --rc genhtml_function_coverage=1 00:18:00.685 --rc genhtml_legend=1 00:18:00.685 --rc geninfo_all_blocks=1 00:18:00.685 --rc geninfo_unexecuted_blocks=1 00:18:00.685 00:18:00.685 ' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.685 --rc genhtml_branch_coverage=1 00:18:00.685 --rc genhtml_function_coverage=1 00:18:00.685 --rc genhtml_legend=1 00:18:00.685 --rc geninfo_all_blocks=1 00:18:00.685 --rc geninfo_unexecuted_blocks=1 00:18:00.685 00:18:00.685 ' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:00.685 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:00.686 
00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86588 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86588 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86588 ']' 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
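fio.sh resolves the 'basic' suite to three job files (randw-verify, randw-verify-j2, randw-verify-depth128) and exports FTL_BDEV_NAME and FTL_JSON_CONF for them to consume. A hedged sketch of how one such job is typically driven through SPDK's fio bdev plugin; the plugin path and job-file keys below are assumptions based on SPDK convention, not shown in this log:

    # assumed invocation of a single suite entry via the fio bdev plugin
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
    # where the job file would reference the FTL bdev with:
    #   ioengine=spdk_bdev
    #   spdk_json_conf=${FTL_JSON_CONF}
    #   filename=${FTL_BDEV_NAME}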
00:18:00.686 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.686 00:40:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:00.945 [2024-11-27 00:40:37.478834] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:18:00.945 [2024-11-27 00:40:37.479250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86588 ] 00:18:00.945 [2024-11-27 00:40:37.634764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:00.945 [2024-11-27 00:40:37.666470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.945 [2024-11-27 00:40:37.666781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.945 [2024-11-27 00:40:37.666842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:01.884 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:01.885 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:01.885 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:01.885 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:02.223 { 00:18:02.223 "name": "nvme0n1", 00:18:02.223 "aliases": [ 00:18:02.223 "645d497d-586c-407a-8fc7-c06c327db234" 00:18:02.223 ], 00:18:02.223 "product_name": "NVMe disk", 00:18:02.223 "block_size": 4096, 00:18:02.223 "num_blocks": 1310720, 00:18:02.223 "uuid": "645d497d-586c-407a-8fc7-c06c327db234", 00:18:02.223 "numa_id": -1, 00:18:02.223 "assigned_rate_limits": { 00:18:02.223 "rw_ios_per_sec": 0, 00:18:02.223 "rw_mbytes_per_sec": 0, 00:18:02.223 "r_mbytes_per_sec": 0, 00:18:02.223 "w_mbytes_per_sec": 0 00:18:02.223 }, 00:18:02.223 "claimed": false, 00:18:02.223 "zoned": false, 00:18:02.223 "supported_io_types": { 00:18:02.223 "read": true, 00:18:02.223 "write": true, 00:18:02.223 "unmap": true, 00:18:02.223 "flush": true, 00:18:02.223 "reset": true, 00:18:02.223 "nvme_admin": true, 00:18:02.223 "nvme_io": true, 00:18:02.223 "nvme_io_md": 
false, 00:18:02.223 "write_zeroes": true, 00:18:02.223 "zcopy": false, 00:18:02.223 "get_zone_info": false, 00:18:02.223 "zone_management": false, 00:18:02.223 "zone_append": false, 00:18:02.223 "compare": true, 00:18:02.223 "compare_and_write": false, 00:18:02.223 "abort": true, 00:18:02.223 "seek_hole": false, 00:18:02.223 "seek_data": false, 00:18:02.223 "copy": true, 00:18:02.223 "nvme_iov_md": false 00:18:02.223 }, 00:18:02.223 "driver_specific": { 00:18:02.223 "nvme": [ 00:18:02.223 { 00:18:02.223 "pci_address": "0000:00:11.0", 00:18:02.223 "trid": { 00:18:02.223 "trtype": "PCIe", 00:18:02.223 "traddr": "0000:00:11.0" 00:18:02.223 }, 00:18:02.223 "ctrlr_data": { 00:18:02.223 "cntlid": 0, 00:18:02.223 "vendor_id": "0x1b36", 00:18:02.223 "model_number": "QEMU NVMe Ctrl", 00:18:02.223 "serial_number": "12341", 00:18:02.223 "firmware_revision": "8.0.0", 00:18:02.223 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:02.223 "oacs": { 00:18:02.223 "security": 0, 00:18:02.223 "format": 1, 00:18:02.223 "firmware": 0, 00:18:02.223 "ns_manage": 1 00:18:02.223 }, 00:18:02.223 "multi_ctrlr": false, 00:18:02.223 "ana_reporting": false 00:18:02.223 }, 00:18:02.223 "vs": { 00:18:02.223 "nvme_version": "1.4" 00:18:02.223 }, 00:18:02.223 "ns_data": { 00:18:02.223 "id": 1, 00:18:02.223 "can_share": false 00:18:02.223 } 00:18:02.223 } 00:18:02.223 ], 00:18:02.223 "mp_policy": "active_passive" 00:18:02.223 } 00:18:02.223 } 00:18:02.223 ]' 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:02.223 00:40:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:02.484 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:02.484 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:02.484 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=875c4723-5aef-40cd-ac8c-d21c1ef89334 00:18:02.484 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 875c4723-5aef-40cd-ac8c-d21c1ef89334 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:02.744 00:40:39 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:02.744 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:03.004 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:03.004 { 00:18:03.004 "name": "57453caf-a8fd-49af-8aeb-baec7e57f758", 00:18:03.004 "aliases": [ 00:18:03.004 "lvs/nvme0n1p0" 00:18:03.004 ], 00:18:03.005 "product_name": "Logical Volume", 00:18:03.005 "block_size": 4096, 00:18:03.005 "num_blocks": 26476544, 00:18:03.005 "uuid": "57453caf-a8fd-49af-8aeb-baec7e57f758", 00:18:03.005 "assigned_rate_limits": { 00:18:03.005 "rw_ios_per_sec": 0, 00:18:03.005 "rw_mbytes_per_sec": 0, 00:18:03.005 "r_mbytes_per_sec": 0, 00:18:03.005 "w_mbytes_per_sec": 0 00:18:03.005 }, 00:18:03.005 "claimed": false, 00:18:03.005 "zoned": false, 00:18:03.005 "supported_io_types": { 00:18:03.005 "read": true, 00:18:03.005 "write": true, 00:18:03.005 "unmap": true, 00:18:03.005 "flush": false, 00:18:03.005 "reset": true, 00:18:03.005 "nvme_admin": false, 00:18:03.005 "nvme_io": false, 00:18:03.005 "nvme_io_md": false, 00:18:03.005 "write_zeroes": true, 00:18:03.005 "zcopy": false, 00:18:03.005 "get_zone_info": false, 00:18:03.005 "zone_management": false, 00:18:03.005 "zone_append": false, 00:18:03.005 "compare": false, 00:18:03.005 "compare_and_write": false, 00:18:03.005 "abort": false, 00:18:03.005 "seek_hole": true, 00:18:03.005 "seek_data": true, 00:18:03.005 "copy": false, 00:18:03.005 "nvme_iov_md": false 00:18:03.005 }, 00:18:03.005 "driver_specific": { 00:18:03.005 "lvol": { 00:18:03.005 "lvol_store_uuid": "875c4723-5aef-40cd-ac8c-d21c1ef89334", 00:18:03.005 "base_bdev": "nvme0n1", 00:18:03.005 "thin_provision": true, 00:18:03.005 "num_allocated_clusters": 0, 00:18:03.005 "snapshot": false, 00:18:03.005 "clone": false, 00:18:03.005 "esnap_clone": false 00:18:03.005 } 00:18:03.005 } 00:18:03.005 } 00:18:03.005 ]' 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:03.005 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:03.266 00:40:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:03.526 { 00:18:03.526 "name": "57453caf-a8fd-49af-8aeb-baec7e57f758", 00:18:03.526 "aliases": [ 00:18:03.526 "lvs/nvme0n1p0" 00:18:03.526 ], 00:18:03.526 "product_name": "Logical Volume", 00:18:03.526 "block_size": 4096, 00:18:03.526 "num_blocks": 26476544, 00:18:03.526 "uuid": "57453caf-a8fd-49af-8aeb-baec7e57f758", 00:18:03.526 "assigned_rate_limits": { 00:18:03.526 "rw_ios_per_sec": 0, 00:18:03.526 "rw_mbytes_per_sec": 0, 00:18:03.526 "r_mbytes_per_sec": 0, 00:18:03.526 "w_mbytes_per_sec": 0 00:18:03.526 }, 00:18:03.526 "claimed": false, 00:18:03.526 "zoned": false, 00:18:03.526 "supported_io_types": { 00:18:03.526 "read": true, 00:18:03.526 "write": true, 00:18:03.526 "unmap": true, 00:18:03.526 "flush": false, 00:18:03.526 "reset": true, 00:18:03.526 "nvme_admin": false, 00:18:03.526 "nvme_io": false, 00:18:03.526 "nvme_io_md": false, 00:18:03.526 "write_zeroes": true, 00:18:03.526 "zcopy": false, 00:18:03.526 "get_zone_info": false, 00:18:03.526 "zone_management": false, 00:18:03.526 "zone_append": false, 00:18:03.526 "compare": false, 00:18:03.526 "compare_and_write": false, 00:18:03.526 "abort": false, 00:18:03.526 "seek_hole": true, 00:18:03.526 "seek_data": true, 00:18:03.526 "copy": false, 00:18:03.526 "nvme_iov_md": false 00:18:03.526 }, 00:18:03.526 "driver_specific": { 00:18:03.526 "lvol": { 00:18:03.526 "lvol_store_uuid": "875c4723-5aef-40cd-ac8c-d21c1ef89334", 00:18:03.526 "base_bdev": "nvme0n1", 00:18:03.526 "thin_provision": true, 00:18:03.526 "num_allocated_clusters": 0, 00:18:03.526 "snapshot": false, 00:18:03.526 "clone": false, 00:18:03.526 "esnap_clone": false 00:18:03.526 } 00:18:03.526 } 00:18:03.526 } 00:18:03.526 ]' 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:03.526 00:40:40 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:03.787 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:03.787 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57453caf-a8fd-49af-8aeb-baec7e57f758 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:04.049 { 00:18:04.049 "name": "57453caf-a8fd-49af-8aeb-baec7e57f758", 00:18:04.049 "aliases": [ 00:18:04.049 "lvs/nvme0n1p0" 00:18:04.049 ], 00:18:04.049 "product_name": "Logical Volume", 00:18:04.049 "block_size": 4096, 00:18:04.049 "num_blocks": 26476544, 00:18:04.049 "uuid": "57453caf-a8fd-49af-8aeb-baec7e57f758", 00:18:04.049 "assigned_rate_limits": { 00:18:04.049 "rw_ios_per_sec": 0, 00:18:04.049 "rw_mbytes_per_sec": 0, 00:18:04.049 "r_mbytes_per_sec": 0, 00:18:04.049 "w_mbytes_per_sec": 0 00:18:04.049 }, 00:18:04.049 "claimed": false, 00:18:04.049 "zoned": false, 00:18:04.049 "supported_io_types": { 00:18:04.049 "read": true, 00:18:04.049 "write": true, 00:18:04.049 "unmap": true, 00:18:04.049 "flush": false, 00:18:04.049 "reset": true, 00:18:04.049 "nvme_admin": false, 00:18:04.049 "nvme_io": false, 00:18:04.049 "nvme_io_md": false, 00:18:04.049 "write_zeroes": true, 00:18:04.049 "zcopy": false, 00:18:04.049 "get_zone_info": false, 00:18:04.049 "zone_management": false, 00:18:04.049 "zone_append": false, 00:18:04.049 "compare": false, 00:18:04.049 "compare_and_write": false, 00:18:04.049 "abort": false, 00:18:04.049 "seek_hole": true, 00:18:04.049 "seek_data": true, 00:18:04.049 "copy": false, 00:18:04.049 "nvme_iov_md": false 00:18:04.049 }, 00:18:04.049 "driver_specific": { 00:18:04.049 "lvol": { 00:18:04.049 "lvol_store_uuid": "875c4723-5aef-40cd-ac8c-d21c1ef89334", 00:18:04.049 "base_bdev": "nvme0n1", 00:18:04.049 "thin_provision": true, 00:18:04.049 "num_allocated_clusters": 0, 00:18:04.049 "snapshot": false, 00:18:04.049 "clone": false, 00:18:04.049 "esnap_clone": false 00:18:04.049 } 00:18:04.049 } 00:18:04.049 } 00:18:04.049 ]' 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:04.049 00:40:40 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 57453caf-a8fd-49af-8aeb-baec7e57f758 -c nvc0n1p0 --l2p_dram_limit 60 00:18:04.311 [2024-11-27 00:40:40.872156] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.311 [2024-11-27 00:40:40.872206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:04.311 [2024-11-27 00:40:40.872221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:04.311 [2024-11-27 00:40:40.872231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.311 [2024-11-27 00:40:40.872303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.311 [2024-11-27 00:40:40.872315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.311 [2024-11-27 00:40:40.872334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:04.311 [2024-11-27 00:40:40.872345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.311 [2024-11-27 00:40:40.872393] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:04.311 [2024-11-27 00:40:40.872667] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:04.311 [2024-11-27 00:40:40.872691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.311 [2024-11-27 00:40:40.872701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.311 [2024-11-27 00:40:40.872710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:04.311 [2024-11-27 00:40:40.872720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.311 [2024-11-27 00:40:40.872768] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6276fac1-0c2c-445b-938e-fba4aa5c4d95 00:18:04.311 [2024-11-27 00:40:40.874189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.311 [2024-11-27 00:40:40.874318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:04.311 [2024-11-27 00:40:40.874338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:04.311 [2024-11-27 00:40:40.874346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.311 [2024-11-27 00:40:40.881527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.311 [2024-11-27 00:40:40.881651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.311 [2024-11-27 00:40:40.881671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.066 ms 00:18:04.311 [2024-11-27 00:40:40.881679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.311 [2024-11-27 00:40:40.881781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.311 [2024-11-27 00:40:40.881792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.311 [2024-11-27 00:40:40.881815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:04.311 [2024-11-27 00:40:40.881823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.311 [2024-11-27 00:40:40.881902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.311 [2024-11-27 00:40:40.881913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:04.311 [2024-11-27 00:40:40.881925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:04.311 [2024-11-27 00:40:40.881935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:04.311 [2024-11-27 00:40:40.881974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:04.311 [2024-11-27 00:40:40.883765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.312 [2024-11-27 00:40:40.883800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.312 [2024-11-27 00:40:40.883820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:18:04.312 [2024-11-27 00:40:40.883830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.312 [2024-11-27 00:40:40.883894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.312 [2024-11-27 00:40:40.883907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:04.312 [2024-11-27 00:40:40.883916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:04.312 [2024-11-27 00:40:40.883930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.312 [2024-11-27 00:40:40.883954] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:04.312 [2024-11-27 00:40:40.884114] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:04.312 [2024-11-27 00:40:40.884133] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:04.312 [2024-11-27 00:40:40.884147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:04.312 [2024-11-27 00:40:40.884159] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884170] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:04.312 [2024-11-27 00:40:40.884191] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:04.312 [2024-11-27 00:40:40.884198] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:04.312 [2024-11-27 00:40:40.884207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:04.312 [2024-11-27 00:40:40.884216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.312 [2024-11-27 00:40:40.884225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:04.312 [2024-11-27 00:40:40.884233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:18:04.312 [2024-11-27 00:40:40.884243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.312 [2024-11-27 00:40:40.884338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.312 [2024-11-27 00:40:40.884353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:04.312 [2024-11-27 00:40:40.884361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:04.312 [2024-11-27 00:40:40.884381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.312 [2024-11-27 00:40:40.884496] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:04.312 [2024-11-27 00:40:40.884508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:04.312 
[2024-11-27 00:40:40.884518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:04.312 [2024-11-27 00:40:40.884548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:04.312 [2024-11-27 00:40:40.884573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:04.312 [2024-11-27 00:40:40.884592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:04.312 [2024-11-27 00:40:40.884602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:04.312 [2024-11-27 00:40:40.884609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:04.312 [2024-11-27 00:40:40.884622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:04.312 [2024-11-27 00:40:40.884648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:04.312 [2024-11-27 00:40:40.884660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:04.312 [2024-11-27 00:40:40.884677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:04.312 [2024-11-27 00:40:40.884704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:04.312 [2024-11-27 00:40:40.884731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:04.312 [2024-11-27 00:40:40.884755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:04.312 [2024-11-27 00:40:40.884785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:04.312 [2024-11-27 00:40:40.884805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:04.312 [2024-11-27 00:40:40.884812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:04.312 [2024-11-27 00:40:40.884821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:04.312 [2024-11-27 00:40:40.884828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:04.312 [2024-11-27 00:40:40.884837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:04.312 [2024-11-27 00:40:40.884843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:04.312 [2024-11-27 00:40:40.885016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:04.312 [2024-11-27 00:40:40.885050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:04.312 [2024-11-27 00:40:40.885073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.312 [2024-11-27 00:40:40.885092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:04.312 [2024-11-27 00:40:40.885113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:04.312 [2024-11-27 00:40:40.885131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.312 [2024-11-27 00:40:40.885151] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:04.312 [2024-11-27 00:40:40.885229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:04.312 [2024-11-27 00:40:40.885271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:04.312 [2024-11-27 00:40:40.885292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:04.312 [2024-11-27 00:40:40.885313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:04.312 [2024-11-27 00:40:40.885332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:04.312 [2024-11-27 00:40:40.885352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:04.312 [2024-11-27 00:40:40.885370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:04.312 [2024-11-27 00:40:40.885434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:04.312 [2024-11-27 00:40:40.885457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:04.312 [2024-11-27 00:40:40.885482] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:04.312 [2024-11-27 00:40:40.885513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:04.312 [2024-11-27 00:40:40.885545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:04.312 [2024-11-27 00:40:40.885575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:04.312 [2024-11-27 00:40:40.885648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:04.312 [2024-11-27 00:40:40.885678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:04.312 [2024-11-27 00:40:40.885709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:04.312 [2024-11-27 00:40:40.885738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:04.312 [2024-11-27 
00:40:40.885890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:04.312 [2024-11-27 00:40:40.885922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:04.312 [2024-11-27 00:40:40.885953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:04.312 [2024-11-27 00:40:40.886105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:04.312 [2024-11-27 00:40:40.886117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:04.312 [2024-11-27 00:40:40.886125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:04.312 [2024-11-27 00:40:40.886135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:04.312 [2024-11-27 00:40:40.886143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:04.313 [2024-11-27 00:40:40.886152] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:04.313 [2024-11-27 00:40:40.886180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:04.313 [2024-11-27 00:40:40.886191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:04.313 [2024-11-27 00:40:40.886199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:04.313 [2024-11-27 00:40:40.886209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:04.313 [2024-11-27 00:40:40.886216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:04.313 [2024-11-27 00:40:40.886227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.313 [2024-11-27 00:40:40.886236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:04.313 [2024-11-27 00:40:40.886248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:18:04.313 [2024-11-27 00:40:40.886256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.313 [2024-11-27 00:40:40.886348] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:04.313 [2024-11-27 00:40:40.886359] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:06.855 [2024-11-27 00:40:43.254997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.255070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:06.855 [2024-11-27 00:40:43.255098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2368.635 ms 00:18:06.855 [2024-11-27 00:40:43.255106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.266132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.266194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.855 [2024-11-27 00:40:43.266210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.910 ms 00:18:06.855 [2024-11-27 00:40:43.266219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.266337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.266348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:06.855 [2024-11-27 00:40:43.266358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:06.855 [2024-11-27 00:40:43.266366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.286805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.286873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.855 [2024-11-27 00:40:43.286888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.357 ms 00:18:06.855 [2024-11-27 00:40:43.286897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.286943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.286953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.855 [2024-11-27 00:40:43.286964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:06.855 [2024-11-27 00:40:43.286985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.287446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.287481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.855 [2024-11-27 00:40:43.287498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:18:06.855 [2024-11-27 00:40:43.287507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.287649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.287660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.855 [2024-11-27 00:40:43.287672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:06.855 [2024-11-27 00:40:43.287695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.294836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.855 [2024-11-27 00:40:43.294886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.855 [2024-11-27 
00:40:43.294899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.107 ms 00:18:06.855 [2024-11-27 00:40:43.294922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.855 [2024-11-27 00:40:43.304134] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:06.856 [2024-11-27 00:40:43.321428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.856 [2024-11-27 00:40:43.321644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:06.856 [2024-11-27 00:40:43.321661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.430 ms 00:18:06.856 [2024-11-27 00:40:43.321674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.856 [2024-11-27 00:40:43.367326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.856 [2024-11-27 00:40:43.367368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:06.856 [2024-11-27 00:40:43.367390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.620 ms 00:18:06.856 [2024-11-27 00:40:43.367403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.856 [2024-11-27 00:40:43.367594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.856 [2024-11-27 00:40:43.367608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:06.856 [2024-11-27 00:40:43.367616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:18:06.856 [2024-11-27 00:40:43.367627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.856 [2024-11-27 00:40:43.370894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.856 [2024-11-27 00:40:43.370931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:06.856 [2024-11-27 00:40:43.370941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:18:06.856 [2024-11-27 00:40:43.370951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.856 [2024-11-27 00:40:43.373638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.856 [2024-11-27 00:40:43.373673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:06.856 [2024-11-27 00:40:43.373683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.646 ms 00:18:06.856 [2024-11-27 00:40:43.373692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.856 [2024-11-27 00:40:43.374042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.856 [2024-11-27 00:40:43.374077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:06.856 [2024-11-27 00:40:43.374086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:18:06.856 [2024-11-27 00:40:43.374097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.856 [2024-11-27 00:40:43.401597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.856 [2024-11-27 00:40:43.401773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:06.856 [2024-11-27 00:40:43.401790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.469 ms 00:18:06.856 [2024-11-27 00:40:43.401811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.856 [2024-11-27 00:40:43.406958] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.856 [2024-11-27 00:40:43.407070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:18:06.856 [2024-11-27 00:40:43.407126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.017 ms
00:18:06.856 [2024-11-27 00:40:43.407247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:06.856 [2024-11-27 00:40:43.410784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.856 [2024-11-27 00:40:43.410908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:18:06.856 [2024-11-27 00:40:43.410963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms
00:18:06.856 [2024-11-27 00:40:43.411015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:06.856 [2024-11-27 00:40:43.415377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.856 [2024-11-27 00:40:43.415496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:06.856 [2024-11-27 00:40:43.415576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms
00:18:06.856 [2024-11-27 00:40:43.415663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:06.856 [2024-11-27 00:40:43.415719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.856 [2024-11-27 00:40:43.415770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:06.856 [2024-11-27 00:40:43.415796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:06.856 [2024-11-27 00:40:43.415839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:06.856 [2024-11-27 00:40:43.415953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:06.856 [2024-11-27 00:40:43.416011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:06.856 [2024-11-27 00:40:43.416091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:18:06.856 [2024-11-27 00:40:43.416117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:06.856 [2024-11-27 00:40:43.417224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2544.583 ms, result 0
00:18:06.856 {
00:18:06.856 "name": "ftl0",
00:18:06.856 "uuid": "6276fac1-0c2c-445b-938e-fba4aa5c4d95"
00:18:06.856 }
00:18:06.856 00:40:43 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:18:06.856 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:18:06.856 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:06.856 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i
00:18:06.856 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:06.856 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:06.856 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:18:07.116 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:18:07.116 [
00:18:07.116 {
00:18:07.116 "name": "ftl0",
00:18:07.116 "aliases": [
00:18:07.116 "6276fac1-0c2c-445b-938e-fba4aa5c4d95"
00:18:07.116 ],
00:18:07.116 "product_name": "FTL disk",
00:18:07.116 "block_size": 4096,
00:18:07.116 "num_blocks": 20971520,
00:18:07.116 "uuid": "6276fac1-0c2c-445b-938e-fba4aa5c4d95",
00:18:07.116 "assigned_rate_limits": {
00:18:07.116 "rw_ios_per_sec": 0,
00:18:07.116 "rw_mbytes_per_sec": 0,
00:18:07.116 "r_mbytes_per_sec": 0,
00:18:07.116 "w_mbytes_per_sec": 0
00:18:07.116 },
00:18:07.116 "claimed": false,
00:18:07.116 "zoned": false,
00:18:07.116 "supported_io_types": {
00:18:07.116 "read": true,
00:18:07.116 "write": true,
00:18:07.116 "unmap": true,
00:18:07.116 "flush": true,
00:18:07.116 "reset": false,
00:18:07.116 "nvme_admin": false,
00:18:07.116 "nvme_io": false,
00:18:07.116 "nvme_io_md": false,
00:18:07.116 "write_zeroes": true,
00:18:07.116 "zcopy": false,
00:18:07.116 "get_zone_info": false,
00:18:07.116 "zone_management": false,
00:18:07.117 "zone_append": false,
00:18:07.117 "compare": false,
00:18:07.117 "compare_and_write": false,
00:18:07.117 "abort": false,
00:18:07.117 "seek_hole": false,
00:18:07.117 "seek_data": false,
00:18:07.117 "copy": false,
00:18:07.117 "nvme_iov_md": false
00:18:07.117 },
00:18:07.117 "driver_specific": {
00:18:07.117 "ftl": {
00:18:07.117 "base_bdev": "57453caf-a8fd-49af-8aeb-baec7e57f758",
00:18:07.117 "cache": "nvc0n1p0"
00:18:07.117 }
00:18:07.117 }
00:18:07.117 }
00:18:07.117 ]
00:18:07.117 00:40:43 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0
00:18:07.117 00:40:43 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:18:07.117 00:40:43 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:18:07.377 00:40:44 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
00:18:07.377 00:40:44 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:18:07.640 [2024-11-27 00:40:44.223427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:07.640 [2024-11-27 00:40:44.223556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:07.640 [2024-11-27 00:40:44.223576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:07.640 [2024-11-27 00:40:44.223585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:07.640 [2024-11-27 00:40:44.223623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:07.640 [2024-11-27 00:40:44.224218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:07.640 [2024-11-27 00:40:44.224253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:07.640 [2024-11-27 00:40:44.224263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms
00:18:07.640 [2024-11-27 00:40:44.224287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:07.640 [2024-11-27 00:40:44.224732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:07.640 [2024-11-27 00:40:44.224755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:07.640 [2024-11-27 00:40:44.224765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms
00:18:07.640 [2024-11-27 00:40:44.224775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:07.640 [2024-11-27 00:40:44.228042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:07.640 [2024-11-27 00:40:44.228074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:07.640 [2024-11-27
00:40:44.228083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:18:07.640 [2024-11-27 00:40:44.228095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.234251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.640 [2024-11-27 00:40:44.234292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:07.640 [2024-11-27 00:40:44.234302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.129 ms 00:18:07.640 [2024-11-27 00:40:44.234311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.236104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.640 [2024-11-27 00:40:44.236144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:07.640 [2024-11-27 00:40:44.236153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:18:07.640 [2024-11-27 00:40:44.236165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.240369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.640 [2024-11-27 00:40:44.240413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:07.640 [2024-11-27 00:40:44.240422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:18:07.640 [2024-11-27 00:40:44.240432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.240593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.640 [2024-11-27 00:40:44.240605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:07.640 [2024-11-27 00:40:44.240615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:18:07.640 [2024-11-27 00:40:44.240624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.242191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.640 [2024-11-27 00:40:44.242227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:07.640 [2024-11-27 00:40:44.242237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:18:07.640 [2024-11-27 00:40:44.242247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.243456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.640 [2024-11-27 00:40:44.243584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:07.640 [2024-11-27 00:40:44.243598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:18:07.640 [2024-11-27 00:40:44.243608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.245098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.640 [2024-11-27 00:40:44.245133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:07.640 [2024-11-27 00:40:44.245142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:18:07.640 [2024-11-27 00:40:44.245151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.640 [2024-11-27 00:40:44.246664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.641 [2024-11-27 00:40:44.246773] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:07.641 [2024-11-27 00:40:44.246787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:18:07.641 [2024-11-27 00:40:44.246797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.641 [2024-11-27 00:40:44.246838] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:07.641 [2024-11-27 00:40:44.246874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.246992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 
00:40:44.247110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:07.641 [2024-11-27 00:40:44.247324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:07.641 [2024-11-27 00:40:44.247671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:07.642 [2024-11-27 00:40:44.247815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:07.642 [2024-11-27 00:40:44.247823] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6276fac1-0c2c-445b-938e-fba4aa5c4d95 00:18:07.642 [2024-11-27 00:40:44.247833] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:07.642 [2024-11-27 00:40:44.247840] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:07.642 [2024-11-27 00:40:44.247850] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:07.642 [2024-11-27 00:40:44.247869] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:07.642 [2024-11-27 00:40:44.247878] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:07.642 [2024-11-27 00:40:44.247886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:07.642 [2024-11-27 00:40:44.247895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:07.642 [2024-11-27 00:40:44.247901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:07.642 [2024-11-27 00:40:44.247910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:07.642 [2024-11-27 00:40:44.247917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.642 [2024-11-27 00:40:44.247927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:07.642 [2024-11-27 00:40:44.247935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:18:07.642 [2024-11-27 00:40:44.247948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.249753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.642 [2024-11-27 00:40:44.249782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:07.642 [2024-11-27 00:40:44.249791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:18:07.642 [2024-11-27 00:40:44.249801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.249899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.642 [2024-11-27 00:40:44.249923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:07.642 [2024-11-27 00:40:44.249934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:07.642 [2024-11-27 00:40:44.249943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.256538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.256573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:07.642 [2024-11-27 00:40:44.256583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.256603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 
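(Note for orientation: the Rollback entries in this trace belong to the 'FTL shutdown' management process kicked off by the bdev_ftl_unload RPC traced above. A minimal sketch of the rpc.py lifecycle this fixture exercises; the last three calls appear verbatim earlier in this log, while bdev_ftl_create and its flags are an assumption based on the stock rpc.py interface, not a command shown in this run:

# sketch only; the base/cache names are taken from the bdev_get_bdevs output above
scripts/rpc.py bdev_ftl_create -b ftl0 -d 57453caf-a8fd-49af-8aeb-baec7e57f758 -c nvc0n1p0
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
scripts/rpc.py bdev_ftl_unload -b ftl0

End of note.)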
[2024-11-27 00:40:44.256662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.256674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.642 [2024-11-27 00:40:44.256684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.256694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.256781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.256796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.642 [2024-11-27 00:40:44.256804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.256824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.256848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.256896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.642 [2024-11-27 00:40:44.256904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.256915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.269078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.269119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.642 [2024-11-27 00:40:44.269128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.269138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.279103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.642 [2024-11-27 00:40:44.279113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.279126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.279244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.642 [2024-11-27 00:40:44.279252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.279263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.279339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.642 [2024-11-27 00:40:44.279359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.279368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.279473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.642 [2024-11-27 00:40:44.279481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.279491] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.279555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:07.642 [2024-11-27 00:40:44.279563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.279573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.279634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.642 [2024-11-27 00:40:44.279642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.279651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:07.642 [2024-11-27 00:40:44.279724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.642 [2024-11-27 00:40:44.279734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:07.642 [2024-11-27 00:40:44.279744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.642 [2024-11-27 00:40:44.279940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.465 ms, result 0 00:18:07.642 true 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86588 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86588 ']' 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86588 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86588 00:18:07.642 killing process with pid 86588 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86588' 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86588 00:18:07.642 00:40:44 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86588 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.919 00:40:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:12.919 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:12.919 fio-3.35 00:18:12.919 Starting 1 thread 00:18:18.206 00:18:18.206 test: (groupid=0, jobs=1): err= 0: pid=86752: Wed Nov 27 00:40:54 2024 00:18:18.206 read: IOPS=881, BW=58.6MiB/s (61.4MB/s)(255MiB/4346msec) 00:18:18.206 slat (nsec): min=2929, max=44974, avg=5356.70, stdev=3194.82 00:18:18.206 clat (usec): min=267, max=1221, avg=513.82, stdev=173.98 00:18:18.206 lat (usec): min=271, max=1232, avg=519.18, stdev=175.06 00:18:18.206 clat percentiles (usec): 00:18:18.206 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 367], 00:18:18.206 | 30.00th=[ 416], 40.00th=[ 449], 50.00th=[ 474], 60.00th=[ 519], 00:18:18.206 | 70.00th=[ 545], 80.00th=[ 603], 90.00th=[ 824], 95.00th=[ 898], 00:18:18.206 | 99.00th=[ 1004], 99.50th=[ 1057], 99.90th=[ 1156], 99.95th=[ 1221], 00:18:18.206 | 99.99th=[ 1221] 00:18:18.206 write: IOPS=888, BW=59.0MiB/s (61.9MB/s)(256MiB/4338msec); 0 zone resets 00:18:18.206 slat (nsec): min=13535, max=85838, avg=21670.01, stdev=7053.49 00:18:18.206 clat (usec): min=285, max=1859, avg=575.51, stdev=201.31 00:18:18.206 lat (usec): min=299, max=1881, avg=597.18, stdev=203.88 00:18:18.206 clat percentiles (usec): 00:18:18.206 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 412], 00:18:18.206 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 545], 60.00th=[ 594], 00:18:18.206 | 70.00th=[ 619], 80.00th=[ 685], 90.00th=[ 906], 95.00th=[ 979], 00:18:18.206 | 99.00th=[ 1205], 99.50th=[ 1270], 99.90th=[ 1598], 99.95th=[ 1844], 00:18:18.206 | 99.99th=[ 1860] 00:18:18.206 bw ( KiB/s): min=50184, max=75208, per=100.00%, avg=62356.00, stdev=10991.63, samples=8 00:18:18.206 iops : min= 738, max= 1106, avg=917.00, stdev=161.64, samples=8 00:18:18.206 lat (usec) : 500=49.46%, 750=36.31%, 
1000=11.87% 00:18:18.206 lat (msec) : 2=2.35% 00:18:18.206 cpu : usr=99.13%, sys=0.12%, ctx=10, majf=0, minf=1181 00:18:18.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:18.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.206 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:18.206 00:18:18.206 Run status group 0 (all jobs): 00:18:18.206 READ: bw=58.6MiB/s (61.4MB/s), 58.6MiB/s-58.6MiB/s (61.4MB/s-61.4MB/s), io=255MiB (267MB), run=4346-4346msec 00:18:18.206 WRITE: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=256MiB (269MB), run=4338-4338msec 00:18:18.780 ----------------------------------------------------- 00:18:18.780 Suppressions used: 00:18:18.780 count bytes template 00:18:18.780 1 5 /usr/src/fio/parse.c 00:18:18.780 1 8 libtcmalloc_minimal.so 00:18:18.780 1 904 libcrypto.so 00:18:18.780 ----------------------------------------------------- 00:18:18.780 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:18.780 00:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:18.780 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:18.780 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:18.780 fio-3.35 00:18:18.780 Starting 2 threads 00:18:45.333 00:18:45.333 first_half: (groupid=0, jobs=1): err= 0: pid=86849: Wed Nov 27 00:41:17 2024 00:18:45.333 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(256MiB/21502msec) 00:18:45.333 slat (nsec): min=3022, max=65462, avg=5286.65, stdev=1555.25 00:18:45.333 clat (usec): min=511, max=340665, avg=35803.82, stdev=22964.76 00:18:45.333 lat (usec): min=514, max=340675, avg=35809.10, stdev=22964.98 00:18:45.333 clat percentiles (msec): 00:18:45.333 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 30], 00:18:45.333 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:45.333 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 41], 95.00th=[ 68], 00:18:45.333 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 284], 99.95th=[ 326], 00:18:45.333 | 99.99th=[ 338] 00:18:45.333 write: IOPS=3051, BW=11.9MiB/s (12.5MB/s)(256MiB/21477msec); 0 zone resets 00:18:45.333 slat (usec): min=3, max=546, avg= 6.50, stdev= 3.64 00:18:45.333 clat (usec): min=309, max=47590, avg=6201.71, stdev=6439.61 00:18:45.333 lat (usec): min=314, max=47597, avg=6208.21, stdev=6439.90 00:18:45.333 clat percentiles (usec): 00:18:45.333 | 1.00th=[ 676], 5.00th=[ 889], 10.00th=[ 1106], 20.00th=[ 2343], 00:18:45.333 | 30.00th=[ 3064], 40.00th=[ 3785], 50.00th=[ 4424], 60.00th=[ 5014], 00:18:45.333 | 70.00th=[ 5604], 80.00th=[ 6915], 90.00th=[13698], 95.00th=[22414], 00:18:45.333 | 99.00th=[31589], 99.50th=[33817], 99.90th=[39060], 99.95th=[44303], 00:18:45.333 | 99.99th=[47449] 00:18:45.333 bw ( KiB/s): min= 752, max=58304, per=88.88%, avg=21698.00, stdev=16512.55, samples=24 00:18:45.333 iops : min= 188, max=14576, avg=5424.50, stdev=4128.14, samples=24 00:18:45.333 lat (usec) : 500=0.03%, 750=1.03%, 1000=2.76% 00:18:45.333 lat (msec) : 2=5.12%, 4=12.55%, 10=21.68%, 20=5.60%, 50=48.02% 00:18:45.333 lat (msec) : 100=1.53%, 250=1.63%, 500=0.06% 00:18:45.333 cpu : usr=99.23%, sys=0.14%, ctx=42, majf=0, minf=5565 00:18:45.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:45.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.333 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:45.333 issued rwts: total=65469,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.333 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:45.333 second_half: (groupid=0, jobs=1): err= 0: pid=86850: Wed Nov 27 00:41:17 2024 00:18:45.333 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(256MiB/21341msec) 00:18:45.333 slat (nsec): min=3031, max=42601, avg=4231.77, stdev=1465.40 00:18:45.333 clat (msec): min=9, max=304, avg=35.92, stdev=19.18 00:18:45.333 lat (msec): min=9, max=304, avg=35.92, stdev=19.18 00:18:45.333 clat percentiles (msec): 00:18:45.333 | 1.00th=[ 26], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:18:45.333 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:45.333 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 62], 
00:18:45.333 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 176], 99.95th=[ 226], 00:18:45.333 | 99.99th=[ 288] 00:18:45.333 write: IOPS=3089, BW=12.1MiB/s (12.7MB/s)(256MiB/21214msec); 0 zone resets 00:18:45.333 slat (usec): min=3, max=1298, avg= 5.61, stdev= 7.29 00:18:45.333 clat (usec): min=331, max=43280, avg=5773.66, stdev=4367.74 00:18:45.333 lat (usec): min=338, max=43289, avg=5779.27, stdev=4368.27 00:18:45.333 clat percentiles (usec): 00:18:45.333 | 1.00th=[ 783], 5.00th=[ 1516], 10.00th=[ 2343], 20.00th=[ 2966], 00:18:45.333 | 30.00th=[ 3556], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5276], 00:18:45.333 | 70.00th=[ 5669], 80.00th=[ 6915], 90.00th=[11338], 95.00th=[13829], 00:18:45.333 | 99.00th=[24511], 99.50th=[30278], 99.90th=[37487], 99.95th=[38536], 00:18:45.333 | 99.99th=[43254] 00:18:45.333 bw ( KiB/s): min= 168, max=47568, per=100.00%, avg=28926.22, stdev=14373.13, samples=18 00:18:45.333 iops : min= 42, max=11892, avg=7231.56, stdev=3593.28, samples=18 00:18:45.333 lat (usec) : 500=0.03%, 750=0.34%, 1000=0.74% 00:18:45.333 lat (msec) : 2=2.42%, 4=14.72%, 10=24.76%, 20=6.36%, 50=47.41% 00:18:45.333 lat (msec) : 100=1.75%, 250=1.47%, 500=0.02% 00:18:45.333 cpu : usr=99.33%, sys=0.13%, ctx=40, majf=0, minf=5573 00:18:45.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:45.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.333 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:45.333 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.333 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:45.333 00:18:45.333 Run status group 0 (all jobs): 00:18:45.333 READ: bw=23.8MiB/s (24.9MB/s), 11.9MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=512MiB (536MB), run=21341-21502msec 00:18:45.333 WRITE: bw=23.8MiB/s (25.0MB/s), 11.9MiB/s-12.1MiB/s (12.5MB/s-12.7MB/s), io=512MiB (537MB), run=21214-21477msec 00:18:45.333 ----------------------------------------------------- 00:18:45.333 Suppressions used: 00:18:45.333 count bytes template 00:18:45.333 2 10 /usr/src/fio/parse.c 00:18:45.333 3 288 /usr/src/fio/iolog.c 00:18:45.333 1 8 libtcmalloc_minimal.so 00:18:45.333 1 904 libcrypto.so 00:18:45.333 ----------------------------------------------------- 00:18:45.333 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:45.333 
00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:45.333 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:45.334 00:41:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:45.334 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:45.334 fio-3.35 00:18:45.334 Starting 1 thread 00:19:00.247 00:19:00.247 test: (groupid=0, jobs=1): err= 0: pid=87130: Wed Nov 27 00:41:34 2024 00:19:00.247 read: IOPS=7520, BW=29.4MiB/s (30.8MB/s)(255MiB/8670msec) 00:19:00.247 slat (nsec): min=3061, max=22209, avg=4809.97, stdev=1141.10 00:19:00.247 clat (usec): min=520, max=33278, avg=17011.56, stdev=2216.91 00:19:00.247 lat (usec): min=529, max=33285, avg=17016.37, stdev=2216.91 00:19:00.247 clat percentiles (usec): 00:19:00.247 | 1.00th=[14746], 5.00th=[15008], 10.00th=[15270], 20.00th=[15401], 00:19:00.247 | 30.00th=[15664], 40.00th=[15795], 50.00th=[16057], 60.00th=[16450], 00:19:00.247 | 70.00th=[17171], 80.00th=[19006], 90.00th=[20317], 95.00th=[21365], 00:19:00.247 | 99.00th=[23725], 99.50th=[25035], 99.90th=[30016], 99.95th=[31065], 00:19:00.247 | 99.99th=[32113] 00:19:00.247 write: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(256MiB/5981msec); 0 zone resets 00:19:00.247 slat (usec): min=4, max=611, avg= 7.54, stdev= 4.21 00:19:00.247 clat (usec): min=478, max=58249, avg=11629.56, stdev=12385.65 00:19:00.247 lat (usec): min=484, max=58255, avg=11637.11, stdev=12385.69 00:19:00.247 clat percentiles (usec): 00:19:00.247 | 1.00th=[ 725], 5.00th=[ 930], 10.00th=[ 1074], 20.00th=[ 1254], 00:19:00.247 | 30.00th=[ 1418], 40.00th=[ 1876], 50.00th=[ 8717], 60.00th=[11338], 00:19:00.247 | 70.00th=[14484], 80.00th=[17171], 90.00th=[35914], 95.00th=[38536], 00:19:00.247 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44303], 99.95th=[46924], 00:19:00.247 | 99.99th=[56361] 00:19:00.247 bw ( KiB/s): min=32142, max=54248, per=99.63%, avg=43668.50, stdev=6763.19, samples=12 00:19:00.247 iops : min= 8035, max=13562, avg=10917.25, stdev=1690.97, samples=12 00:19:00.247 lat (usec) : 500=0.01%, 750=0.71%, 1000=2.88% 00:19:00.247 lat (msec) : 2=16.69%, 4=0.74%, 10=6.40%, 20=58.60%, 50=13.95% 00:19:00.247 lat (msec) : 100=0.02% 00:19:00.247 cpu : usr=99.06%, sys=0.20%, ctx=25, 
majf=0, minf=5577 00:19:00.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:00.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.247 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.247 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.247 00:19:00.247 Run status group 0 (all jobs): 00:19:00.247 READ: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=255MiB (267MB), run=8670-8670msec 00:19:00.247 WRITE: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=256MiB (268MB), run=5981-5981msec 00:19:00.247 ----------------------------------------------------- 00:19:00.247 Suppressions used: 00:19:00.247 count bytes template 00:19:00.247 1 5 /usr/src/fio/parse.c 00:19:00.247 2 192 /usr/src/fio/iolog.c 00:19:00.247 1 8 libtcmalloc_minimal.so 00:19:00.247 1 904 libcrypto.so 00:19:00.247 ----------------------------------------------------- 00:19:00.247 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:00.247 Remove shared memory files 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69555 /dev/shm/spdk_tgt_trace.pid85531 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:00.247 ************************************ 00:19:00.247 END TEST ftl_fio_basic 00:19:00.247 ************************************ 00:19:00.247 00:19:00.247 real 0m58.516s 00:19:00.247 user 2m6.403s 00:19:00.247 sys 0m2.731s 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.247 00:41:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:00.247 00:41:35 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:00.247 00:41:35 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:00.247 00:41:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.247 00:41:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:00.247 ************************************ 00:19:00.247 START TEST ftl_bdevperf 00:19:00.247 ************************************ 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:00.247 * Looking for test storage... 
00:19:00.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:00.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.247 --rc genhtml_branch_coverage=1 00:19:00.247 --rc genhtml_function_coverage=1 00:19:00.247 --rc genhtml_legend=1 00:19:00.247 --rc geninfo_all_blocks=1 00:19:00.247 --rc geninfo_unexecuted_blocks=1 00:19:00.247 00:19:00.247 ' 00:19:00.247 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:00.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.247 --rc genhtml_branch_coverage=1 00:19:00.247 
--rc genhtml_function_coverage=1 00:19:00.247 --rc genhtml_legend=1 00:19:00.247 --rc geninfo_all_blocks=1 00:19:00.247 --rc geninfo_unexecuted_blocks=1 00:19:00.248 00:19:00.248 ' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:00.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.248 --rc genhtml_branch_coverage=1 00:19:00.248 --rc genhtml_function_coverage=1 00:19:00.248 --rc genhtml_legend=1 00:19:00.248 --rc geninfo_all_blocks=1 00:19:00.248 --rc geninfo_unexecuted_blocks=1 00:19:00.248 00:19:00.248 ' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:00.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.248 --rc genhtml_branch_coverage=1 00:19:00.248 --rc genhtml_function_coverage=1 00:19:00.248 --rc genhtml_legend=1 00:19:00.248 --rc geninfo_all_blocks=1 00:19:00.248 --rc geninfo_unexecuted_blocks=1 00:19:00.248 00:19:00.248 ' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=87368 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 87368 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 87368 ']' 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.248 00:41:35 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:00.248 [2024-11-27 00:41:36.021929] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
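The trace above starts bdevperf idle and then blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of the same launch sequence, assuming the default /var/tmp/spdk.sock socket; the binary and rpc.py paths are the ones printed in the trace, and the polling loop stands in for the autotest waitforlisten helper:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # -z keeps bdevperf waiting for RPC configuration; -T names the target bdev.
    "$BDEVPERF" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'kill $bdevperf_pid' SIGINT SIGTERM EXIT

    # Poll until the app listens on its RPC socket (waitforlisten retries like this).
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done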
00:19:00.248 [2024-11-27 00:41:36.022252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87368 ] 00:19:00.248 [2024-11-27 00:41:36.176437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.248 [2024-11-27 00:41:36.206891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:00.248 00:41:36 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:00.509 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:00.771 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:00.771 { 00:19:00.772 "name": "nvme0n1", 00:19:00.772 "aliases": [ 00:19:00.772 "0576d878-81d6-4455-9edb-13aa8c9503d9" 00:19:00.772 ], 00:19:00.772 "product_name": "NVMe disk", 00:19:00.772 "block_size": 4096, 00:19:00.772 "num_blocks": 1310720, 00:19:00.772 "uuid": "0576d878-81d6-4455-9edb-13aa8c9503d9", 00:19:00.772 "numa_id": -1, 00:19:00.772 "assigned_rate_limits": { 00:19:00.772 "rw_ios_per_sec": 0, 00:19:00.772 "rw_mbytes_per_sec": 0, 00:19:00.772 "r_mbytes_per_sec": 0, 00:19:00.772 "w_mbytes_per_sec": 0 00:19:00.772 }, 00:19:00.772 "claimed": true, 00:19:00.772 "claim_type": "read_many_write_one", 00:19:00.772 "zoned": false, 00:19:00.772 "supported_io_types": { 00:19:00.772 "read": true, 00:19:00.772 "write": true, 00:19:00.772 "unmap": true, 00:19:00.772 "flush": true, 00:19:00.772 "reset": true, 00:19:00.772 "nvme_admin": true, 00:19:00.772 "nvme_io": true, 00:19:00.772 "nvme_io_md": false, 00:19:00.772 "write_zeroes": true, 00:19:00.772 "zcopy": false, 00:19:00.772 "get_zone_info": false, 00:19:00.772 "zone_management": false, 00:19:00.772 "zone_append": false, 00:19:00.772 "compare": true, 00:19:00.772 "compare_and_write": false, 00:19:00.772 "abort": true, 00:19:00.772 "seek_hole": false, 00:19:00.772 "seek_data": false, 00:19:00.772 "copy": true, 00:19:00.772 "nvme_iov_md": false 00:19:00.772 }, 00:19:00.772 "driver_specific": { 00:19:00.772 
"nvme": [ 00:19:00.772 { 00:19:00.772 "pci_address": "0000:00:11.0", 00:19:00.772 "trid": { 00:19:00.772 "trtype": "PCIe", 00:19:00.772 "traddr": "0000:00:11.0" 00:19:00.772 }, 00:19:00.772 "ctrlr_data": { 00:19:00.772 "cntlid": 0, 00:19:00.772 "vendor_id": "0x1b36", 00:19:00.772 "model_number": "QEMU NVMe Ctrl", 00:19:00.772 "serial_number": "12341", 00:19:00.772 "firmware_revision": "8.0.0", 00:19:00.772 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:00.772 "oacs": { 00:19:00.772 "security": 0, 00:19:00.772 "format": 1, 00:19:00.772 "firmware": 0, 00:19:00.772 "ns_manage": 1 00:19:00.772 }, 00:19:00.772 "multi_ctrlr": false, 00:19:00.772 "ana_reporting": false 00:19:00.772 }, 00:19:00.772 "vs": { 00:19:00.772 "nvme_version": "1.4" 00:19:00.772 }, 00:19:00.772 "ns_data": { 00:19:00.772 "id": 1, 00:19:00.772 "can_share": false 00:19:00.772 } 00:19:00.772 } 00:19:00.772 ], 00:19:00.772 "mp_policy": "active_passive" 00:19:00.772 } 00:19:00.772 } 00:19:00.772 ]' 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:00.772 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:01.033 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=875c4723-5aef-40cd-ac8c-d21c1ef89334 00:19:01.033 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:01.033 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 875c4723-5aef-40cd-ac8c-d21c1ef89334 00:19:01.295 00:41:37 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2bf05d9b-2e5d-484d-9c27-bc025d443134 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2bf05d9b-2e5d-484d-9c27-bc025d443134 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=4082006f-b48a-44fb-99e7-99775c830812 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4082006f-b48a-44fb-99e7-99775c830812 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=4082006f-b48a-44fb-99e7-99775c830812 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 4082006f-b48a-44fb-99e7-99775c830812 00:19:01.556 00:41:38 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4082006f-b48a-44fb-99e7-99775c830812 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:01.556 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4082006f-b48a-44fb-99e7-99775c830812 00:19:01.817 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:01.817 { 00:19:01.818 "name": "4082006f-b48a-44fb-99e7-99775c830812", 00:19:01.818 "aliases": [ 00:19:01.818 "lvs/nvme0n1p0" 00:19:01.818 ], 00:19:01.818 "product_name": "Logical Volume", 00:19:01.818 "block_size": 4096, 00:19:01.818 "num_blocks": 26476544, 00:19:01.818 "uuid": "4082006f-b48a-44fb-99e7-99775c830812", 00:19:01.818 "assigned_rate_limits": { 00:19:01.818 "rw_ios_per_sec": 0, 00:19:01.818 "rw_mbytes_per_sec": 0, 00:19:01.818 "r_mbytes_per_sec": 0, 00:19:01.818 "w_mbytes_per_sec": 0 00:19:01.818 }, 00:19:01.818 "claimed": false, 00:19:01.818 "zoned": false, 00:19:01.818 "supported_io_types": { 00:19:01.818 "read": true, 00:19:01.818 "write": true, 00:19:01.818 "unmap": true, 00:19:01.818 "flush": false, 00:19:01.818 "reset": true, 00:19:01.818 "nvme_admin": false, 00:19:01.818 "nvme_io": false, 00:19:01.818 "nvme_io_md": false, 00:19:01.818 "write_zeroes": true, 00:19:01.818 "zcopy": false, 00:19:01.818 "get_zone_info": false, 00:19:01.818 "zone_management": false, 00:19:01.818 "zone_append": false, 00:19:01.818 "compare": false, 00:19:01.818 "compare_and_write": false, 00:19:01.818 "abort": false, 00:19:01.818 "seek_hole": true, 00:19:01.818 "seek_data": true, 00:19:01.818 "copy": false, 00:19:01.818 "nvme_iov_md": false 00:19:01.818 }, 00:19:01.818 "driver_specific": { 00:19:01.818 "lvol": { 00:19:01.818 "lvol_store_uuid": "2bf05d9b-2e5d-484d-9c27-bc025d443134", 00:19:01.818 "base_bdev": "nvme0n1", 00:19:01.818 "thin_provision": true, 00:19:01.818 "num_allocated_clusters": 0, 00:19:01.818 "snapshot": false, 00:19:01.818 "clone": false, 00:19:01.818 "esnap_clone": false 00:19:01.818 } 00:19:01.818 } 00:19:01.818 } 00:19:01.818 ]' 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:01.818 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 4082006f-b48a-44fb-99e7-99775c830812 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=4082006f-b48a-44fb-99e7-99775c830812 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:02.079 00:41:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4082006f-b48a-44fb-99e7-99775c830812 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:02.340 { 00:19:02.340 "name": "4082006f-b48a-44fb-99e7-99775c830812", 00:19:02.340 "aliases": [ 00:19:02.340 "lvs/nvme0n1p0" 00:19:02.340 ], 00:19:02.340 "product_name": "Logical Volume", 00:19:02.340 "block_size": 4096, 00:19:02.340 "num_blocks": 26476544, 00:19:02.340 "uuid": "4082006f-b48a-44fb-99e7-99775c830812", 00:19:02.340 "assigned_rate_limits": { 00:19:02.340 "rw_ios_per_sec": 0, 00:19:02.340 "rw_mbytes_per_sec": 0, 00:19:02.340 "r_mbytes_per_sec": 0, 00:19:02.340 "w_mbytes_per_sec": 0 00:19:02.340 }, 00:19:02.340 "claimed": false, 00:19:02.340 "zoned": false, 00:19:02.340 "supported_io_types": { 00:19:02.340 "read": true, 00:19:02.340 "write": true, 00:19:02.340 "unmap": true, 00:19:02.340 "flush": false, 00:19:02.340 "reset": true, 00:19:02.340 "nvme_admin": false, 00:19:02.340 "nvme_io": false, 00:19:02.340 "nvme_io_md": false, 00:19:02.340 "write_zeroes": true, 00:19:02.340 "zcopy": false, 00:19:02.340 "get_zone_info": false, 00:19:02.340 "zone_management": false, 00:19:02.340 "zone_append": false, 00:19:02.340 "compare": false, 00:19:02.340 "compare_and_write": false, 00:19:02.340 "abort": false, 00:19:02.340 "seek_hole": true, 00:19:02.340 "seek_data": true, 00:19:02.340 "copy": false, 00:19:02.340 "nvme_iov_md": false 00:19:02.340 }, 00:19:02.340 "driver_specific": { 00:19:02.340 "lvol": { 00:19:02.340 "lvol_store_uuid": "2bf05d9b-2e5d-484d-9c27-bc025d443134", 00:19:02.340 "base_bdev": "nvme0n1", 00:19:02.340 "thin_provision": true, 00:19:02.340 "num_allocated_clusters": 0, 00:19:02.340 "snapshot": false, 00:19:02.340 "clone": false, 00:19:02.340 "esnap_clone": false 00:19:02.340 } 00:19:02.340 } 00:19:02.340 } 00:19:02.340 ]' 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:02.340 00:41:39 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:02.613 00:41:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:02.613 00:41:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 4082006f-b48a-44fb-99e7-99775c830812 00:19:02.613 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=4082006f-b48a-44fb-99e7-99775c830812 00:19:02.613 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:02.613 00:41:39 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:02.613 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:02.613 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4082006f-b48a-44fb-99e7-99775c830812 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:02.878 { 00:19:02.878 "name": "4082006f-b48a-44fb-99e7-99775c830812", 00:19:02.878 "aliases": [ 00:19:02.878 "lvs/nvme0n1p0" 00:19:02.878 ], 00:19:02.878 "product_name": "Logical Volume", 00:19:02.878 "block_size": 4096, 00:19:02.878 "num_blocks": 26476544, 00:19:02.878 "uuid": "4082006f-b48a-44fb-99e7-99775c830812", 00:19:02.878 "assigned_rate_limits": { 00:19:02.878 "rw_ios_per_sec": 0, 00:19:02.878 "rw_mbytes_per_sec": 0, 00:19:02.878 "r_mbytes_per_sec": 0, 00:19:02.878 "w_mbytes_per_sec": 0 00:19:02.878 }, 00:19:02.878 "claimed": false, 00:19:02.878 "zoned": false, 00:19:02.878 "supported_io_types": { 00:19:02.878 "read": true, 00:19:02.878 "write": true, 00:19:02.878 "unmap": true, 00:19:02.878 "flush": false, 00:19:02.878 "reset": true, 00:19:02.878 "nvme_admin": false, 00:19:02.878 "nvme_io": false, 00:19:02.878 "nvme_io_md": false, 00:19:02.878 "write_zeroes": true, 00:19:02.878 "zcopy": false, 00:19:02.878 "get_zone_info": false, 00:19:02.878 "zone_management": false, 00:19:02.878 "zone_append": false, 00:19:02.878 "compare": false, 00:19:02.878 "compare_and_write": false, 00:19:02.878 "abort": false, 00:19:02.878 "seek_hole": true, 00:19:02.878 "seek_data": true, 00:19:02.878 "copy": false, 00:19:02.878 "nvme_iov_md": false 00:19:02.878 }, 00:19:02.878 "driver_specific": { 00:19:02.878 "lvol": { 00:19:02.878 "lvol_store_uuid": "2bf05d9b-2e5d-484d-9c27-bc025d443134", 00:19:02.878 "base_bdev": "nvme0n1", 00:19:02.878 "thin_provision": true, 00:19:02.878 "num_allocated_clusters": 0, 00:19:02.878 "snapshot": false, 00:19:02.878 "clone": false, 00:19:02.878 "esnap_clone": false 00:19:02.878 } 00:19:02.878 } 00:19:02.878 } 00:19:02.878 ]' 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:02.878 00:41:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4082006f-b48a-44fb-99e7-99775c830812 -c nvc0n1p0 --l2p_dram_limit 20 00:19:03.141 [2024-11-27 00:41:39.745330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.745372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:03.141 [2024-11-27 00:41:39.745391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:03.141 [2024-11-27 00:41:39.745400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.745450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.745460] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:03.141 [2024-11-27 00:41:39.745476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:03.141 [2024-11-27 00:41:39.745485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.745504] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:03.141 [2024-11-27 00:41:39.745766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:03.141 [2024-11-27 00:41:39.745785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.745795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:03.141 [2024-11-27 00:41:39.745805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:03.141 [2024-11-27 00:41:39.745813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.745892] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dc7fe27d-5d4a-4009-ad61-c61b90fadd9e 00:19:03.141 [2024-11-27 00:41:39.746923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.747055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:03.141 [2024-11-27 00:41:39.747072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:03.141 [2024-11-27 00:41:39.747084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.752074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.752105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:03.141 [2024-11-27 00:41:39.752115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.955 ms 00:19:03.141 [2024-11-27 00:41:39.752125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.752192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.752202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:03.141 [2024-11-27 00:41:39.752215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:03.141 [2024-11-27 00:41:39.752226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.752274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.752286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:03.141 [2024-11-27 00:41:39.752294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:03.141 [2024-11-27 00:41:39.752302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.752321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:03.141 [2024-11-27 00:41:39.753742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.753772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:03.141 [2024-11-27 00:41:39.753787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:19:03.141 [2024-11-27 00:41:39.753794] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.753823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.753831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:03.141 [2024-11-27 00:41:39.753842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:03.141 [2024-11-27 00:41:39.753850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.753881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:03.141 [2024-11-27 00:41:39.754019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:03.141 [2024-11-27 00:41:39.754034] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:03.141 [2024-11-27 00:41:39.754044] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:03.141 [2024-11-27 00:41:39.754056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754065] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754075] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:03.141 [2024-11-27 00:41:39.754082] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:03.141 [2024-11-27 00:41:39.754092] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:03.141 [2024-11-27 00:41:39.754104] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:03.141 [2024-11-27 00:41:39.754113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.754120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:03.141 [2024-11-27 00:41:39.754130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:19:03.141 [2024-11-27 00:41:39.754137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.754227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.141 [2024-11-27 00:41:39.754236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:03.141 [2024-11-27 00:41:39.754245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:03.141 [2024-11-27 00:41:39.754252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.141 [2024-11-27 00:41:39.754343] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:03.141 [2024-11-27 00:41:39.754357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:03.141 [2024-11-27 00:41:39.754370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:03.141 [2024-11-27 00:41:39.754398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:03.141 
[2024-11-27 00:41:39.754417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:03.141 [2024-11-27 00:41:39.754427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.141 [2024-11-27 00:41:39.754444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:03.141 [2024-11-27 00:41:39.754452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:03.141 [2024-11-27 00:41:39.754463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.141 [2024-11-27 00:41:39.754471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:03.141 [2024-11-27 00:41:39.754484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:03.141 [2024-11-27 00:41:39.754491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:03.141 [2024-11-27 00:41:39.754508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:03.141 [2024-11-27 00:41:39.754534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:03.141 [2024-11-27 00:41:39.754558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:03.141 [2024-11-27 00:41:39.754583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:03.141 [2024-11-27 00:41:39.754609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.141 [2024-11-27 00:41:39.754625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:03.141 [2024-11-27 00:41:39.754635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.141 [2024-11-27 00:41:39.754652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:03.141 [2024-11-27 00:41:39.754659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:03.141 [2024-11-27 00:41:39.754669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.141 [2024-11-27 00:41:39.754676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:03.141 [2024-11-27 00:41:39.754685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:03.141 [2024-11-27 00:41:39.754693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.141 [2024-11-27 00:41:39.754702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:03.141 [2024-11-27 00:41:39.754709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:03.142 [2024-11-27 00:41:39.754717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.142 [2024-11-27 00:41:39.754725] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:03.142 [2024-11-27 00:41:39.754737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:03.142 [2024-11-27 00:41:39.754745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.142 [2024-11-27 00:41:39.754757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.142 [2024-11-27 00:41:39.754766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:03.142 [2024-11-27 00:41:39.754774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:03.142 [2024-11-27 00:41:39.754781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:03.142 [2024-11-27 00:41:39.754789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:03.142 [2024-11-27 00:41:39.754796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:03.142 [2024-11-27 00:41:39.754804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:03.142 [2024-11-27 00:41:39.754814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:03.142 [2024-11-27 00:41:39.754825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.142 [2024-11-27 00:41:39.754833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:03.142 [2024-11-27 00:41:39.754842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:03.142 [2024-11-27 00:41:39.754848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:03.142 [2024-11-27 00:41:39.755082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:03.142 [2024-11-27 00:41:39.755115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:03.142 [2024-11-27 00:41:39.755148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:03.142 [2024-11-27 00:41:39.755177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:03.142 [2024-11-27 00:41:39.755207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:03.142 [2024-11-27 00:41:39.755276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:03.142 [2024-11-27 00:41:39.755309] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:03.142 [2024-11-27 00:41:39.755338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:03.142 [2024-11-27 00:41:39.755368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:03.142 [2024-11-27 00:41:39.755399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:03.142 [2024-11-27 00:41:39.755429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:03.142 [2024-11-27 00:41:39.755492] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:03.142 [2024-11-27 00:41:39.755532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.142 [2024-11-27 00:41:39.755562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:03.142 [2024-11-27 00:41:39.755626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:03.142 [2024-11-27 00:41:39.755656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:03.142 [2024-11-27 00:41:39.755706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:03.142 [2024-11-27 00:41:39.755739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.142 [2024-11-27 00:41:39.755762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:03.142 [2024-11-27 00:41:39.755782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:19:03.142 [2024-11-27 00:41:39.755837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.142 [2024-11-27 00:41:39.755909] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
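The layout dump above is emitted while bdev_ftl_create (traced a few lines earlier) brings the FTL device up. Re-stated as a standalone invocation with the arguments copied from the xtrace, where the lvol UUID is specific to this run and -t 240 is the timeout bdevperf.sh passes through:

    # -d: base bdev, the 103424 MiB thin-provisioned lvol carved out of nvme0n1
    # -c: NV cache, the 5171 MiB split bdev nvc0n1p0 on the 0000:00:10.0 controller
    # --l2p_dram_limit: cap the DRAM-resident L2P table at 20 MiB
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 \
        -d 4082006f-b48a-44fb-99e7-99775c830812 \
        -c nvc0n1p0 \
        --l2p_dram_limit 20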
00:19:03.142 [2024-11-27 00:41:39.755947] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:07.346 [2024-11-27 00:41:43.297127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.297434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:07.346 [2024-11-27 00:41:43.297522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3541.200 ms 00:19:07.346 [2024-11-27 00:41:43.297551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.311411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.311618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.346 [2024-11-27 00:41:43.311640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.735 ms 00:19:07.346 [2024-11-27 00:41:43.311655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.311756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.311769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:07.346 [2024-11-27 00:41:43.311783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:07.346 [2024-11-27 00:41:43.311795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.330698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.330764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.346 [2024-11-27 00:41:43.330779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.851 ms 00:19:07.346 [2024-11-27 00:41:43.330790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.330831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.330847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.346 [2024-11-27 00:41:43.330885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:07.346 [2024-11-27 00:41:43.330896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.331479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.331524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.346 [2024-11-27 00:41:43.331536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:19:07.346 [2024-11-27 00:41:43.331550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.331677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.331700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.346 [2024-11-27 00:41:43.331714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:07.346 [2024-11-27 00:41:43.331725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.339607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.339823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.346 [2024-11-27 
00:41:43.339842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.862 ms 00:19:07.346 [2024-11-27 00:41:43.339875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.350057] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:07.346 [2024-11-27 00:41:43.358223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.358271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:07.346 [2024-11-27 00:41:43.358285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms 00:19:07.346 [2024-11-27 00:41:43.358293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.446051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.446114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:07.346 [2024-11-27 00:41:43.446135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.719 ms 00:19:07.346 [2024-11-27 00:41:43.446148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.446366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.446380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:07.346 [2024-11-27 00:41:43.446391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:19:07.346 [2024-11-27 00:41:43.446400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.452161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.452212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:07.346 [2024-11-27 00:41:43.452226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.735 ms 00:19:07.346 [2024-11-27 00:41:43.452234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.457281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.457494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:07.346 [2024-11-27 00:41:43.457520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.987 ms 00:19:07.346 [2024-11-27 00:41:43.457528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.458008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.458044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:07.346 [2024-11-27 00:41:43.458062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:19:07.346 [2024-11-27 00:41:43.458072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.503333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.503380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:07.346 [2024-11-27 00:41:43.503395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.232 ms 00:19:07.346 [2024-11-27 00:41:43.503405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.510811] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.510880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:07.346 [2024-11-27 00:41:43.510895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.317 ms 00:19:07.346 [2024-11-27 00:41:43.510904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.516553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.516600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:07.346 [2024-11-27 00:41:43.516613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.599 ms 00:19:07.346 [2024-11-27 00:41:43.516621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.522667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.522889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:07.346 [2024-11-27 00:41:43.522918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.997 ms 00:19:07.346 [2024-11-27 00:41:43.522926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.522973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.522993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:07.346 [2024-11-27 00:41:43.523004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:07.346 [2024-11-27 00:41:43.523013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.346 [2024-11-27 00:41:43.523089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.346 [2024-11-27 00:41:43.523100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:07.347 [2024-11-27 00:41:43.523111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:07.347 [2024-11-27 00:41:43.523120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.347 [2024-11-27 00:41:43.524242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3778.421 ms, result 0 00:19:07.347 { 00:19:07.347 "name": "ftl0", 00:19:07.347 "uuid": "dc7fe27d-5d4a-4009-ad61-c61b90fadd9e" 00:19:07.347 } 00:19:07.347 00:41:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:07.347 00:41:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:07.347 00:41:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:07.347 00:41:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:07.347 [2024-11-27 00:41:43.874390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:07.347 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:07.347 Zero copy mechanism will not be used. 00:19:07.347 Running I/O for 4 seconds... 
00:19:09.230 713.00 IOPS, 47.35 MiB/s [2024-11-27T00:41:46.961Z] 688.50 IOPS, 45.72 MiB/s [2024-11-27T00:41:47.904Z] 706.00 IOPS, 46.88 MiB/s [2024-11-27T00:41:47.904Z] 730.00 IOPS, 48.48 MiB/s 00:19:11.117 Latency(us) 00:19:11.117 [2024-11-27T00:41:47.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.117 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:11.117 ftl0 : 4.00 729.87 48.47 0.00 0.00 1461.16 348.16 3629.69 00:19:11.117 [2024-11-27T00:41:47.904Z] =================================================================================================================== 00:19:11.117 [2024-11-27T00:41:47.904Z] Total : 729.87 48.47 0.00 0.00 1461.16 348.16 3629.69 00:19:11.117 { 00:19:11.117 "results": [ 00:19:11.117 { 00:19:11.117 "job": "ftl0", 00:19:11.117 "core_mask": "0x1", 00:19:11.117 "workload": "randwrite", 00:19:11.117 "status": "finished", 00:19:11.117 "queue_depth": 1, 00:19:11.117 "io_size": 69632, 00:19:11.117 "runtime": 4.00208, 00:19:11.117 "iops": 729.8704673569744, 00:19:11.117 "mibps": 48.467960722924076, 00:19:11.117 "io_failed": 0, 00:19:11.117 "io_timeout": 0, 00:19:11.117 "avg_latency_us": 1461.1616801411528, 00:19:11.117 "min_latency_us": 348.16, 00:19:11.117 "max_latency_us": 3629.686153846154 00:19:11.117 } 00:19:11.117 ], 00:19:11.117 "core_count": 1 00:19:11.117 } 00:19:11.117 [2024-11-27 00:41:47.882766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:11.117 00:41:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:11.378 [2024-11-27 00:41:47.991264] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:11.378 Running I/O for 4 seconds... 
00:19:13.263 8186.00 IOPS, 31.98 MiB/s [2024-11-27T00:41:51.433Z] 6988.50 IOPS, 27.30 MiB/s [2024-11-27T00:41:52.002Z] 6456.33 IOPS, 25.22 MiB/s [2024-11-27T00:41:52.263Z] 6092.25 IOPS, 23.80 MiB/s 00:19:15.476 Latency(us) 00:19:15.476 [2024-11-27T00:41:52.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.476 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:15.476 ftl0 : 4.04 6066.76 23.70 0.00 0.00 21004.57 264.66 42951.29 00:19:15.476 [2024-11-27T00:41:52.263Z] =================================================================================================================== 00:19:15.476 [2024-11-27T00:41:52.263Z] Total : 6066.76 23.70 0.00 0.00 21004.57 0.00 42951.29 00:19:15.476 { 00:19:15.476 "results": [ 00:19:15.476 { 00:19:15.476 "job": "ftl0", 00:19:15.476 "core_mask": "0x1", 00:19:15.476 "workload": "randwrite", 00:19:15.476 "status": "finished", 00:19:15.476 "queue_depth": 128, 00:19:15.476 "io_size": 4096, 00:19:15.476 "runtime": 4.037078, 00:19:15.476 "iops": 6066.7641298979115, 00:19:15.476 "mibps": 23.698297382413717, 00:19:15.476 "io_failed": 0, 00:19:15.476 "io_timeout": 0, 00:19:15.476 "avg_latency_us": 21004.56760311059, 00:19:15.476 "min_latency_us": 264.6646153846154, 00:19:15.476 "max_latency_us": 42951.28615384615 00:19:15.476 } 00:19:15.476 ], 00:19:15.476 "core_count": 1 00:19:15.476 } 00:19:15.476 [2024-11-27 00:41:52.035171] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:15.476 00:41:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:15.476 [2024-11-27 00:41:52.154490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:15.476 Running I/O for 4 seconds... 
00:19:17.805 4628.00 IOPS, 18.08 MiB/s [2024-11-27T00:41:55.165Z] 4807.00 IOPS, 18.78 MiB/s [2024-11-27T00:41:56.552Z] 4817.67 IOPS, 18.82 MiB/s [2024-11-27T00:41:56.552Z] 4814.75 IOPS, 18.81 MiB/s 00:19:19.765 Latency(us) 00:19:19.765 [2024-11-27T00:41:56.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.765 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.765 Verification LBA range: start 0x0 length 0x1400000 00:19:19.765 ftl0 : 4.02 4827.40 18.86 0.00 0.00 26433.79 345.01 35490.26 00:19:19.765 [2024-11-27T00:41:56.552Z] =================================================================================================================== 00:19:19.765 [2024-11-27T00:41:56.552Z] Total : 4827.40 18.86 0.00 0.00 26433.79 0.00 35490.26 00:19:19.765 [2024-11-27 00:41:56.179289] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:19.765 { 00:19:19.765 "results": [ 00:19:19.765 { 00:19:19.765 "job": "ftl0", 00:19:19.765 "core_mask": "0x1", 00:19:19.765 "workload": "verify", 00:19:19.765 "status": "finished", 00:19:19.765 "verify_range": { 00:19:19.765 "start": 0, 00:19:19.765 "length": 20971520 00:19:19.765 }, 00:19:19.765 "queue_depth": 128, 00:19:19.765 "io_size": 4096, 00:19:19.765 "runtime": 4.015826, 00:19:19.765 "iops": 4827.400390355559, 00:19:19.765 "mibps": 18.8570327748264, 00:19:19.765 "io_failed": 0, 00:19:19.765 "io_timeout": 0, 00:19:19.765 "avg_latency_us": 26433.7886823957, 00:19:19.765 "min_latency_us": 345.0092307692308, 00:19:19.765 "max_latency_us": 35490.264615384614 00:19:19.765 } 00:19:19.765 ], 00:19:19.765 "core_count": 1 00:19:19.765 } 00:19:19.765 00:41:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:19.765 [2024-11-27 00:41:56.395603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.765 [2024-11-27 00:41:56.395669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:19.765 [2024-11-27 00:41:56.395698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:19.765 [2024-11-27 00:41:56.395711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.765 [2024-11-27 00:41:56.395752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:19.765 [2024-11-27 00:41:56.396568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.765 [2024-11-27 00:41:56.396637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:19.765 [2024-11-27 00:41:56.396654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:19:19.765 [2024-11-27 00:41:56.396670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.765 [2024-11-27 00:41:56.400299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.765 [2024-11-27 00:41:56.400474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:19.765 [2024-11-27 00:41:56.400652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:19:19.765 [2024-11-27 00:41:56.400683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.611093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.611296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:20.029 [2024-11-27 00:41:56.611325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 210.373 ms 00:19:20.029 [2024-11-27 00:41:56.611348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.617740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.617804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.029 [2024-11-27 00:41:56.617822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.340 ms 00:19:20.029 [2024-11-27 00:41:56.617838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.620890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.620961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.029 [2024-11-27 00:41:56.620976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:19:20.029 [2024-11-27 00:41:56.620990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.627490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.627554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.029 [2024-11-27 00:41:56.627571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:19:20.029 [2024-11-27 00:41:56.627589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.627756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.627776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.029 [2024-11-27 00:41:56.627792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:20.029 [2024-11-27 00:41:56.627808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.631154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.631346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:20.029 [2024-11-27 00:41:56.631369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.322 ms 00:19:20.029 [2024-11-27 00:41:56.631382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.634506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.634614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:20.029 [2024-11-27 00:41:56.634632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:19:20.029 [2024-11-27 00:41:56.634646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.636912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.636969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.029 [2024-11-27 00:41:56.636984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:19:20.029 [2024-11-27 00:41:56.637012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.639167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.029 [2024-11-27 00:41:56.639228] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.029 [2024-11-27 00:41:56.639242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:19:20.029 [2024-11-27 00:41:56.639254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.029 [2024-11-27 00:41:56.639306] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.029 [2024-11-27 00:41:56.639334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:20.029 [2024-11-27 00:41:56.639655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.639998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:20.029 [2024-11-27 00:41:56.640123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640819] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.030 [2024-11-27 00:41:56.640905] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.030 [2024-11-27 00:41:56.640920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dc7fe27d-5d4a-4009-ad61-c61b90fadd9e 00:19:20.030 [2024-11-27 00:41:56.640935] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.030 [2024-11-27 00:41:56.640948] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.030 [2024-11-27 00:41:56.640962] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.030 [2024-11-27 00:41:56.640974] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.030 [2024-11-27 00:41:56.640995] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.030 [2024-11-27 00:41:56.641009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.030 [2024-11-27 00:41:56.641026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.030 [2024-11-27 00:41:56.641039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.030 [2024-11-27 00:41:56.641053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.030 [2024-11-27 00:41:56.641069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.030 [2024-11-27 00:41:56.641090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.030 [2024-11-27 00:41:56.641108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.764 ms 00:19:20.030 [2024-11-27 00:41:56.641123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.643623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.030 [2024-11-27 00:41:56.643673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.030 [2024-11-27 00:41:56.643691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.470 ms 00:19:20.030 [2024-11-27 00:41:56.643708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.643846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.030 [2024-11-27 00:41:56.643890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.030 [2024-11-27 00:41:56.643905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:20.030 [2024-11-27 00:41:56.643924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.651987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.030 [2024-11-27 00:41:56.652059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.030 [2024-11-27 00:41:56.652077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.030 [2024-11-27 00:41:56.652096] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.652177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.030 [2024-11-27 00:41:56.652196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.030 [2024-11-27 00:41:56.652209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.030 [2024-11-27 00:41:56.652225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.652308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.030 [2024-11-27 00:41:56.652327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.030 [2024-11-27 00:41:56.652341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.030 [2024-11-27 00:41:56.652357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.652380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.030 [2024-11-27 00:41:56.652396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.030 [2024-11-27 00:41:56.652419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.030 [2024-11-27 00:41:56.652437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.666640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.030 [2024-11-27 00:41:56.666702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.030 [2024-11-27 00:41:56.666720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.030 [2024-11-27 00:41:56.666734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.678415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.030 [2024-11-27 00:41:56.678482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.030 [2024-11-27 00:41:56.678497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.030 [2024-11-27 00:41:56.678517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.030 [2024-11-27 00:41:56.678623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.030 [2024-11-27 00:41:56.678643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.030 [2024-11-27 00:41:56.678662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.030 [2024-11-27 00:41:56.678678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.031 [2024-11-27 00:41:56.678770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.031 [2024-11-27 00:41:56.678790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.031 [2024-11-27 00:41:56.678804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.031 [2024-11-27 00:41:56.678826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.031 [2024-11-27 00:41:56.678956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.031 [2024-11-27 00:41:56.678977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.031 [2024-11-27 00:41:56.678991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:20.031 [2024-11-27 00:41:56.679012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.031 [2024-11-27 00:41:56.679066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.031 [2024-11-27 00:41:56.679085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:20.031 [2024-11-27 00:41:56.679101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.031 [2024-11-27 00:41:56.679118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.031 [2024-11-27 00:41:56.679179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.031 [2024-11-27 00:41:56.679200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.031 [2024-11-27 00:41:56.679215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.031 [2024-11-27 00:41:56.679232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.031 [2024-11-27 00:41:56.679298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.031 [2024-11-27 00:41:56.679318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.031 [2024-11-27 00:41:56.679342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.031 [2024-11-27 00:41:56.679366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.031 [2024-11-27 00:41:56.679566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 283.883 ms, result 0 00:19:20.031 true 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 87368 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 87368 ']' 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 87368 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87368 00:19:20.031 killing process with pid 87368 00:19:20.031 Received shutdown signal, test time was about 4.000000 seconds 00:19:20.031 00:19:20.031 Latency(us) 00:19:20.031 [2024-11-27T00:41:56.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.031 [2024-11-27T00:41:56.818Z] =================================================================================================================== 00:19:20.031 [2024-11-27T00:41:56.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87368' 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 87368 00:19:20.031 00:41:56 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 87368 00:19:20.292 00:41:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:20.292 00:41:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:20.292 Remove shared memory files 00:19:20.292 00:41:57 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:20.292 00:41:57 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:20.292 00:41:57 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:20.292 00:41:57 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:20.292 00:41:57 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:20.292 00:41:57 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:20.554 ************************************ 00:19:20.554 END TEST ftl_bdevperf 00:19:20.554 ************************************ 00:19:20.554 00:19:20.554 real 0m21.276s 00:19:20.554 user 0m23.885s 00:19:20.554 sys 0m0.947s 00:19:20.554 00:41:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.554 00:41:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:20.554 00:41:57 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:20.554 00:41:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:20.554 00:41:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.554 00:41:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:20.554 ************************************ 00:19:20.554 START TEST ftl_trim 00:19:20.554 ************************************ 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:20.554 * Looking for test storage... 00:19:20.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.554 00:41:57 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.554 --rc genhtml_branch_coverage=1 00:19:20.554 --rc genhtml_function_coverage=1 00:19:20.554 --rc genhtml_legend=1 00:19:20.554 --rc geninfo_all_blocks=1 00:19:20.554 --rc geninfo_unexecuted_blocks=1 00:19:20.554 00:19:20.554 ' 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.554 --rc genhtml_branch_coverage=1 00:19:20.554 --rc genhtml_function_coverage=1 00:19:20.554 --rc genhtml_legend=1 00:19:20.554 --rc geninfo_all_blocks=1 00:19:20.554 --rc geninfo_unexecuted_blocks=1 00:19:20.554 00:19:20.554 ' 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.554 --rc genhtml_branch_coverage=1 00:19:20.554 --rc genhtml_function_coverage=1 00:19:20.554 --rc genhtml_legend=1 00:19:20.554 --rc geninfo_all_blocks=1 00:19:20.554 --rc geninfo_unexecuted_blocks=1 00:19:20.554 00:19:20.554 ' 00:19:20.554 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.554 --rc genhtml_branch_coverage=1 00:19:20.554 --rc genhtml_function_coverage=1 00:19:20.554 --rc genhtml_legend=1 00:19:20.554 --rc geninfo_all_blocks=1 00:19:20.554 --rc geninfo_unexecuted_blocks=1 00:19:20.554 00:19:20.554 ' 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:20.554 00:41:57 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:20.555 00:41:57 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87709 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87709 00:19:20.555 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87709 ']' 00:19:20.555 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.555 00:41:57 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:20.555 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.555 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.555 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.555 00:41:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:20.815 [2024-11-27 00:41:57.423311] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:19:20.815 [2024-11-27 00:41:57.423691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87709 ] 00:19:20.815 [2024-11-27 00:41:57.589690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.076 [2024-11-27 00:41:57.621519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.076 [2024-11-27 00:41:57.621819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.076 [2024-11-27 00:41:57.621898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.646 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.646 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:21.646 00:41:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:21.646 00:41:58 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:21.646 00:41:58 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:21.646 00:41:58 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:21.646 00:41:58 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:21.646 00:41:58 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:21.908 00:41:58 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:21.908 00:41:58 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:21.908 00:41:58 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:21.908 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:21.908 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:21.908 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:21.908 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:21.908 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:22.169 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.169 { 00:19:22.169 "name": "nvme0n1", 00:19:22.169 "aliases": [ 
00:19:22.169 "21c3e60f-027a-4f81-8331-1355405ce20a" 00:19:22.169 ], 00:19:22.169 "product_name": "NVMe disk", 00:19:22.169 "block_size": 4096, 00:19:22.169 "num_blocks": 1310720, 00:19:22.169 "uuid": "21c3e60f-027a-4f81-8331-1355405ce20a", 00:19:22.169 "numa_id": -1, 00:19:22.169 "assigned_rate_limits": { 00:19:22.169 "rw_ios_per_sec": 0, 00:19:22.169 "rw_mbytes_per_sec": 0, 00:19:22.169 "r_mbytes_per_sec": 0, 00:19:22.169 "w_mbytes_per_sec": 0 00:19:22.169 }, 00:19:22.169 "claimed": true, 00:19:22.169 "claim_type": "read_many_write_one", 00:19:22.169 "zoned": false, 00:19:22.169 "supported_io_types": { 00:19:22.169 "read": true, 00:19:22.169 "write": true, 00:19:22.169 "unmap": true, 00:19:22.169 "flush": true, 00:19:22.169 "reset": true, 00:19:22.169 "nvme_admin": true, 00:19:22.169 "nvme_io": true, 00:19:22.169 "nvme_io_md": false, 00:19:22.169 "write_zeroes": true, 00:19:22.169 "zcopy": false, 00:19:22.169 "get_zone_info": false, 00:19:22.169 "zone_management": false, 00:19:22.169 "zone_append": false, 00:19:22.169 "compare": true, 00:19:22.169 "compare_and_write": false, 00:19:22.169 "abort": true, 00:19:22.169 "seek_hole": false, 00:19:22.169 "seek_data": false, 00:19:22.169 "copy": true, 00:19:22.169 "nvme_iov_md": false 00:19:22.169 }, 00:19:22.169 "driver_specific": { 00:19:22.169 "nvme": [ 00:19:22.169 { 00:19:22.169 "pci_address": "0000:00:11.0", 00:19:22.169 "trid": { 00:19:22.169 "trtype": "PCIe", 00:19:22.169 "traddr": "0000:00:11.0" 00:19:22.169 }, 00:19:22.169 "ctrlr_data": { 00:19:22.169 "cntlid": 0, 00:19:22.169 "vendor_id": "0x1b36", 00:19:22.169 "model_number": "QEMU NVMe Ctrl", 00:19:22.169 "serial_number": "12341", 00:19:22.169 "firmware_revision": "8.0.0", 00:19:22.169 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:22.169 "oacs": { 00:19:22.169 "security": 0, 00:19:22.169 "format": 1, 00:19:22.169 "firmware": 0, 00:19:22.169 "ns_manage": 1 00:19:22.169 }, 00:19:22.169 "multi_ctrlr": false, 00:19:22.169 "ana_reporting": false 00:19:22.169 }, 00:19:22.169 "vs": { 00:19:22.169 "nvme_version": "1.4" 00:19:22.169 }, 00:19:22.169 "ns_data": { 00:19:22.169 "id": 1, 00:19:22.169 "can_share": false 00:19:22.169 } 00:19:22.169 } 00:19:22.169 ], 00:19:22.169 "mp_policy": "active_passive" 00:19:22.169 } 00:19:22.169 } 00:19:22.169 ]' 00:19:22.169 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.169 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:22.169 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.169 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:22.169 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:22.169 00:41:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:22.169 00:41:58 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:22.169 00:41:58 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:22.169 00:41:58 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:22.170 00:41:58 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:22.170 00:41:58 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:22.431 00:41:59 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2bf05d9b-2e5d-484d-9c27-bc025d443134 00:19:22.431 00:41:59 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:22.431 00:41:59 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2bf05d9b-2e5d-484d-9c27-bc025d443134 00:19:22.691 00:41:59 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=c3ccb55b-1325-44cb-a8a0-0cfe56325f2f 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c3ccb55b-1325-44cb-a8a0-0cfe56325f2f 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ab997be7-7519-4581-8242-c38c56020dbd 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ab997be7-7519-4581-8242-c38c56020dbd 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ab997be7-7519-4581-8242-c38c56020dbd 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:22.951 00:41:59 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ab997be7-7519-4581-8242-c38c56020dbd 00:19:22.951 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ab997be7-7519-4581-8242-c38c56020dbd 00:19:22.951 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.951 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.951 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.951 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab997be7-7519-4581-8242-c38c56020dbd 00:19:23.211 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.211 { 00:19:23.211 "name": "ab997be7-7519-4581-8242-c38c56020dbd", 00:19:23.211 "aliases": [ 00:19:23.211 "lvs/nvme0n1p0" 00:19:23.211 ], 00:19:23.211 "product_name": "Logical Volume", 00:19:23.211 "block_size": 4096, 00:19:23.211 "num_blocks": 26476544, 00:19:23.211 "uuid": "ab997be7-7519-4581-8242-c38c56020dbd", 00:19:23.211 "assigned_rate_limits": { 00:19:23.211 "rw_ios_per_sec": 0, 00:19:23.211 "rw_mbytes_per_sec": 0, 00:19:23.211 "r_mbytes_per_sec": 0, 00:19:23.211 "w_mbytes_per_sec": 0 00:19:23.211 }, 00:19:23.211 "claimed": false, 00:19:23.211 "zoned": false, 00:19:23.211 "supported_io_types": { 00:19:23.211 "read": true, 00:19:23.211 "write": true, 00:19:23.211 "unmap": true, 00:19:23.211 "flush": false, 00:19:23.211 "reset": true, 00:19:23.211 "nvme_admin": false, 00:19:23.211 "nvme_io": false, 00:19:23.211 "nvme_io_md": false, 00:19:23.211 "write_zeroes": true, 00:19:23.211 "zcopy": false, 00:19:23.211 "get_zone_info": false, 00:19:23.211 "zone_management": false, 00:19:23.211 "zone_append": false, 00:19:23.211 "compare": false, 00:19:23.211 "compare_and_write": false, 00:19:23.211 "abort": false, 00:19:23.211 "seek_hole": true, 00:19:23.211 "seek_data": true, 00:19:23.211 "copy": false, 00:19:23.211 "nvme_iov_md": false 00:19:23.211 }, 00:19:23.211 "driver_specific": { 00:19:23.211 "lvol": { 00:19:23.211 "lvol_store_uuid": "c3ccb55b-1325-44cb-a8a0-0cfe56325f2f", 00:19:23.211 "base_bdev": "nvme0n1", 00:19:23.211 "thin_provision": true, 00:19:23.211 "num_allocated_clusters": 0, 00:19:23.211 "snapshot": false, 00:19:23.211 "clone": false, 00:19:23.211 "esnap_clone": false 00:19:23.211 } 00:19:23.211 } 00:19:23.211 } 00:19:23.211 ]' 00:19:23.211 00:41:59 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.211 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:23.211 00:41:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.471 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:23.471 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.471 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.471 00:42:00 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:23.471 00:42:00 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:23.471 00:42:00 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:23.732 00:42:00 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:23.732 00:42:00 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:23.732 00:42:00 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ab997be7-7519-4581-8242-c38c56020dbd 00:19:23.732 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ab997be7-7519-4581-8242-c38c56020dbd 00:19:23.732 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.732 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.732 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.732 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab997be7-7519-4581-8242-c38c56020dbd 00:19:23.732 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.732 { 00:19:23.732 "name": "ab997be7-7519-4581-8242-c38c56020dbd", 00:19:23.732 "aliases": [ 00:19:23.732 "lvs/nvme0n1p0" 00:19:23.732 ], 00:19:23.732 "product_name": "Logical Volume", 00:19:23.732 "block_size": 4096, 00:19:23.732 "num_blocks": 26476544, 00:19:23.732 "uuid": "ab997be7-7519-4581-8242-c38c56020dbd", 00:19:23.732 "assigned_rate_limits": { 00:19:23.732 "rw_ios_per_sec": 0, 00:19:23.732 "rw_mbytes_per_sec": 0, 00:19:23.732 "r_mbytes_per_sec": 0, 00:19:23.732 "w_mbytes_per_sec": 0 00:19:23.732 }, 00:19:23.732 "claimed": false, 00:19:23.732 "zoned": false, 00:19:23.732 "supported_io_types": { 00:19:23.732 "read": true, 00:19:23.732 "write": true, 00:19:23.732 "unmap": true, 00:19:23.732 "flush": false, 00:19:23.732 "reset": true, 00:19:23.732 "nvme_admin": false, 00:19:23.732 "nvme_io": false, 00:19:23.732 "nvme_io_md": false, 00:19:23.732 "write_zeroes": true, 00:19:23.732 "zcopy": false, 00:19:23.732 "get_zone_info": false, 00:19:23.732 "zone_management": false, 00:19:23.732 "zone_append": false, 00:19:23.732 "compare": false, 00:19:23.732 "compare_and_write": false, 00:19:23.732 "abort": false, 00:19:23.732 "seek_hole": true, 00:19:23.732 "seek_data": true, 00:19:23.732 "copy": false, 00:19:23.732 "nvme_iov_md": false 00:19:23.732 }, 00:19:23.732 "driver_specific": { 00:19:23.732 "lvol": { 00:19:23.732 "lvol_store_uuid": "c3ccb55b-1325-44cb-a8a0-0cfe56325f2f", 00:19:23.732 "base_bdev": "nvme0n1", 00:19:23.732 "thin_provision": true, 00:19:23.732 "num_allocated_clusters": 0, 00:19:23.732 "snapshot": false, 00:19:23.732 "clone": false, 00:19:23.732 "esnap_clone": false 00:19:23.732 } 00:19:23.732 } 00:19:23.732 } 00:19:23.732 ]' 00:19:23.732 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.993 00:42:00 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.993 00:42:00 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:23.993 00:42:00 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:23.993 00:42:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:23.993 00:42:00 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:23.993 00:42:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ab997be7-7519-4581-8242-c38c56020dbd 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ab997be7-7519-4581-8242-c38c56020dbd 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.993 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab997be7-7519-4581-8242-c38c56020dbd 00:19:24.276 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:24.276 { 00:19:24.276 "name": "ab997be7-7519-4581-8242-c38c56020dbd", 00:19:24.276 "aliases": [ 00:19:24.276 "lvs/nvme0n1p0" 00:19:24.276 ], 00:19:24.276 "product_name": "Logical Volume", 00:19:24.276 "block_size": 4096, 00:19:24.276 "num_blocks": 26476544, 00:19:24.276 "uuid": "ab997be7-7519-4581-8242-c38c56020dbd", 00:19:24.276 "assigned_rate_limits": { 00:19:24.276 "rw_ios_per_sec": 0, 00:19:24.276 "rw_mbytes_per_sec": 0, 00:19:24.276 "r_mbytes_per_sec": 0, 00:19:24.276 "w_mbytes_per_sec": 0 00:19:24.276 }, 00:19:24.276 "claimed": false, 00:19:24.276 "zoned": false, 00:19:24.276 "supported_io_types": { 00:19:24.276 "read": true, 00:19:24.276 "write": true, 00:19:24.276 "unmap": true, 00:19:24.276 "flush": false, 00:19:24.276 "reset": true, 00:19:24.276 "nvme_admin": false, 00:19:24.276 "nvme_io": false, 00:19:24.276 "nvme_io_md": false, 00:19:24.276 "write_zeroes": true, 00:19:24.276 "zcopy": false, 00:19:24.276 "get_zone_info": false, 00:19:24.276 "zone_management": false, 00:19:24.276 "zone_append": false, 00:19:24.276 "compare": false, 00:19:24.276 "compare_and_write": false, 00:19:24.276 "abort": false, 00:19:24.276 "seek_hole": true, 00:19:24.276 "seek_data": true, 00:19:24.276 "copy": false, 00:19:24.276 "nvme_iov_md": false 00:19:24.276 }, 00:19:24.276 "driver_specific": { 00:19:24.276 "lvol": { 00:19:24.276 "lvol_store_uuid": "c3ccb55b-1325-44cb-a8a0-0cfe56325f2f", 00:19:24.276 "base_bdev": "nvme0n1", 00:19:24.276 "thin_provision": true, 00:19:24.276 "num_allocated_clusters": 0, 00:19:24.276 "snapshot": false, 00:19:24.276 "clone": false, 00:19:24.276 "esnap_clone": false 00:19:24.276 } 00:19:24.276 } 00:19:24.276 } 00:19:24.276 ]' 00:19:24.276 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:24.276 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:24.276 00:42:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:24.276 00:42:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:24.276 00:42:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:24.276 00:42:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:24.276 00:42:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:24.276 00:42:01 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ab997be7-7519-4581-8242-c38c56020dbd -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:24.536 [2024-11-27 00:42:01.199608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.536 [2024-11-27 00:42:01.199648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.536 [2024-11-27 00:42:01.199661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:24.536 [2024-11-27 00:42:01.199669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.536 [2024-11-27 00:42:01.201564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.536 [2024-11-27 00:42:01.201595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.536 [2024-11-27 00:42:01.201603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:19:24.536 [2024-11-27 00:42:01.201613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.536 [2024-11-27 00:42:01.201773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.536 [2024-11-27 00:42:01.201980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.536 [2024-11-27 00:42:01.201996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.536 [2024-11-27 00:42:01.202006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.536 [2024-11-27 00:42:01.202012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:24.536 [2024-11-27 00:42:01.202019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.536 [2024-11-27 00:42:01.202291] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:19:24.536 [2024-11-27 00:42:01.203317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.536 [2024-11-27 00:42:01.203344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:24.536 [2024-11-27 00:42:01.203353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:24.536 [2024-11-27 00:42:01.203361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.536 [2024-11-27 00:42:01.208538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.536 [2024-11-27 00:42:01.208576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.536 [2024-11-27 00:42:01.208585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.105 ms 00:19:24.536 [2024-11-27 00:42:01.208591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.536 [2024-11-27 00:42:01.208680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.536 [2024-11-27 00:42:01.208704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.536 [2024-11-27 00:42:01.208714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.045 ms 00:19:24.536 [2024-11-27 00:42:01.208720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.536 [2024-11-27 00:42:01.208775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.537 [2024-11-27 00:42:01.208782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.537 [2024-11-27 00:42:01.208789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:24.537 [2024-11-27 00:42:01.208794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.537 [2024-11-27 00:42:01.208829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:24.537 [2024-11-27 00:42:01.210111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.537 [2024-11-27 00:42:01.210139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.537 [2024-11-27 00:42:01.210147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:19:24.537 [2024-11-27 00:42:01.210153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.537 [2024-11-27 00:42:01.210199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.537 [2024-11-27 00:42:01.210215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.537 [2024-11-27 00:42:01.210221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:24.537 [2024-11-27 00:42:01.210229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.537 [2024-11-27 00:42:01.210251] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:24.537 [2024-11-27 00:42:01.210361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:24.537 [2024-11-27 00:42:01.210375] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.537 [2024-11-27 00:42:01.210394] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:24.537 [2024-11-27 00:42:01.210401] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210409] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210415] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:24.537 [2024-11-27 00:42:01.210422] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.537 [2024-11-27 00:42:01.210428] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:24.537 [2024-11-27 00:42:01.210437] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:24.537 [2024-11-27 00:42:01.210442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.537 [2024-11-27 00:42:01.210449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.537 [2024-11-27 00:42:01.210455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:19:24.537 [2024-11-27 00:42:01.210461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.537 [2024-11-27 00:42:01.210540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.537 
[2024-11-27 00:42:01.210549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.537 [2024-11-27 00:42:01.210555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:24.537 [2024-11-27 00:42:01.210562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.537 [2024-11-27 00:42:01.210667] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.537 [2024-11-27 00:42:01.210681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.537 [2024-11-27 00:42:01.210687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.537 [2024-11-27 00:42:01.210707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.537 [2024-11-27 00:42:01.210724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.537 [2024-11-27 00:42:01.210735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.537 [2024-11-27 00:42:01.210742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:24.537 [2024-11-27 00:42:01.210747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.537 [2024-11-27 00:42:01.210754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.537 [2024-11-27 00:42:01.210759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:24.537 [2024-11-27 00:42:01.210767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.537 [2024-11-27 00:42:01.210782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.537 [2024-11-27 00:42:01.210801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.537 [2024-11-27 00:42:01.210820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.537 [2024-11-27 00:42:01.210838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:24.537 [2024-11-27 00:42:01.210881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.537 [2024-11-27 00:42:01.210900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.537 [2024-11-27 00:42:01.210912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.537 [2024-11-27 00:42:01.210919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:24.537 [2024-11-27 00:42:01.210926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.537 [2024-11-27 00:42:01.210933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:24.537 [2024-11-27 00:42:01.210939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:24.537 [2024-11-27 00:42:01.210946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:24.537 [2024-11-27 00:42:01.210959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:24.537 [2024-11-27 00:42:01.210964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.537 [2024-11-27 00:42:01.210972] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.537 [2024-11-27 00:42:01.210978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.537 [2024-11-27 00:42:01.210987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.537 [2024-11-27 00:42:01.210993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.537 [2024-11-27 00:42:01.211011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.537 [2024-11-27 00:42:01.211017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.537 [2024-11-27 00:42:01.211025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.537 [2024-11-27 00:42:01.211031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.537 [2024-11-27 00:42:01.211038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.537 [2024-11-27 00:42:01.211044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.537 [2024-11-27 00:42:01.211053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.537 [2024-11-27 00:42:01.211061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.537 [2024-11-27 00:42:01.211070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:24.537 [2024-11-27 00:42:01.211076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:24.537 [2024-11-27 00:42:01.211084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:24.537 [2024-11-27 00:42:01.211091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:24.537 [2024-11-27 00:42:01.211098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:24.537 [2024-11-27 00:42:01.211105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:24.537 [2024-11-27 00:42:01.211114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:24.537 [2024-11-27 00:42:01.211120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:24.537 [2024-11-27 00:42:01.211128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:24.537 [2024-11-27 00:42:01.211134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:24.537 [2024-11-27 00:42:01.211143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:24.537 [2024-11-27 00:42:01.211149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:24.537 [2024-11-27 00:42:01.211157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:24.538 [2024-11-27 00:42:01.211163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:24.538 [2024-11-27 00:42:01.211171] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.538 [2024-11-27 00:42:01.211179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.538 [2024-11-27 00:42:01.211195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:24.538 [2024-11-27 00:42:01.211201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.538 [2024-11-27 00:42:01.211209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.538 [2024-11-27 00:42:01.211215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.538 [2024-11-27 00:42:01.211222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.538 [2024-11-27 00:42:01.211227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.538 [2024-11-27 00:42:01.211235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:19:24.538 [2024-11-27 00:42:01.211241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.538 [2024-11-27 00:42:01.211331] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:24.538 [2024-11-27 00:42:01.211345] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:27.144 [2024-11-27 00:42:03.841020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.841081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:27.144 [2024-11-27 00:42:03.841100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2629.674 ms 00:19:27.144 [2024-11-27 00:42:03.841109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.849736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.849779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.144 [2024-11-27 00:42:03.849793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.500 ms 00:19:27.144 [2024-11-27 00:42:03.849813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.849964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.849975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.144 [2024-11-27 00:42:03.849987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:27.144 [2024-11-27 00:42:03.849994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.868590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.868635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.144 [2024-11-27 00:42:03.868649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.557 ms 00:19:27.144 [2024-11-27 00:42:03.868657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.868742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.868757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.144 [2024-11-27 00:42:03.868767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.144 [2024-11-27 00:42:03.868775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.869116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.869139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.144 [2024-11-27 00:42:03.869161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:27.144 [2024-11-27 00:42:03.869169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.869311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.869321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.144 [2024-11-27 00:42:03.869334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:27.144 [2024-11-27 00:42:03.869342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.875291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.875337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:27.144 [2024-11-27 00:42:03.875351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.910 ms 00:19:27.144 [2024-11-27 00:42:03.875361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.144 [2024-11-27 00:42:03.884794] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:27.144 [2024-11-27 00:42:03.899708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.144 [2024-11-27 00:42:03.899746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.144 [2024-11-27 00:42:03.899756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.252 ms 00:19:27.144 [2024-11-27 00:42:03.899766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.406 [2024-11-27 00:42:03.958626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.406 [2024-11-27 00:42:03.958667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:27.406 [2024-11-27 00:42:03.958678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.790 ms 00:19:27.406 [2024-11-27 00:42:03.958693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:03.958877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:03.958890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.407 [2024-11-27 00:42:03.958898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:27.407 [2024-11-27 00:42:03.958907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:03.962137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:03.962171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:27.407 [2024-11-27 00:42:03.962180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:19:27.407 [2024-11-27 00:42:03.962190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:03.965041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:03.965074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:27.407 [2024-11-27 00:42:03.965084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:19:27.407 [2024-11-27 00:42:03.965093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:03.965393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:03.965416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.407 [2024-11-27 00:42:03.965436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:27.407 [2024-11-27 00:42:03.965448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:03.994716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:03.994752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:27.407 [2024-11-27 00:42:03.994765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.235 ms 00:19:27.407 [2024-11-27 00:42:03.994775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
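(Recap: the startup trace above is the work of the single bdev_ftl_create call at trim.sh@49; everything the script staged before it condenses to the RPC sequence below. This is a sketch assembled only from commands already logged in this run — the two UUID placeholders stand in for the values the preceding RPCs printed back.)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base data device: attach the PCIe controller, build an lvstore, carve a thin lvol
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
    # write-buffer cache: attach the second controller, split off a 5171 MiB slice
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of both, pinned to 3 cores with a 60 MiB L2P budget
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10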
00:19:27.407 [2024-11-27 00:42:03.998593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:03.998629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:27.407 [2024-11-27 00:42:03.998640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:19:27.407 [2024-11-27 00:42:03.998651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:04.001692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:04.001727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:27.407 [2024-11-27 00:42:04.001737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:19:27.407 [2024-11-27 00:42:04.001747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:04.005269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:04.005306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.407 [2024-11-27 00:42:04.005315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.462 ms 00:19:27.407 [2024-11-27 00:42:04.005325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:04.005384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:04.005395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.407 [2024-11-27 00:42:04.005404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.407 [2024-11-27 00:42:04.005413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:04.005489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.407 [2024-11-27 00:42:04.005500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.407 [2024-11-27 00:42:04.005508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:27.407 [2024-11-27 00:42:04.005517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.407 [2024-11-27 00:42:04.006429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.407 [2024-11-27 00:42:04.007433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2806.505 ms, result 0 00:19:27.407 [2024-11-27 00:42:04.008176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.407 { 00:19:27.407 "name": "ftl0", 00:19:27.407 "uuid": "4eaa5371-a878-4f7e-9799-2a3c6c0cfb36" 00:19:27.407 } 00:19:27.407 00:42:04 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:27.407 00:42:04 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:27.407 00:42:04 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:27.407 00:42:04 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:27.407 00:42:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:27.407 00:42:04 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:27.407 00:42:04 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:27.665 00:42:04 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:27.665 [ 00:19:27.665 { 00:19:27.665 "name": "ftl0", 00:19:27.665 "aliases": [ 00:19:27.665 "4eaa5371-a878-4f7e-9799-2a3c6c0cfb36" 00:19:27.665 ], 00:19:27.665 "product_name": "FTL disk", 00:19:27.665 "block_size": 4096, 00:19:27.665 "num_blocks": 23592960, 00:19:27.665 "uuid": "4eaa5371-a878-4f7e-9799-2a3c6c0cfb36", 00:19:27.665 "assigned_rate_limits": { 00:19:27.665 "rw_ios_per_sec": 0, 00:19:27.665 "rw_mbytes_per_sec": 0, 00:19:27.665 "r_mbytes_per_sec": 0, 00:19:27.665 "w_mbytes_per_sec": 0 00:19:27.665 }, 00:19:27.665 "claimed": false, 00:19:27.665 "zoned": false, 00:19:27.665 "supported_io_types": { 00:19:27.665 "read": true, 00:19:27.665 "write": true, 00:19:27.665 "unmap": true, 00:19:27.665 "flush": true, 00:19:27.665 "reset": false, 00:19:27.665 "nvme_admin": false, 00:19:27.665 "nvme_io": false, 00:19:27.665 "nvme_io_md": false, 00:19:27.665 "write_zeroes": true, 00:19:27.665 "zcopy": false, 00:19:27.665 "get_zone_info": false, 00:19:27.665 "zone_management": false, 00:19:27.665 "zone_append": false, 00:19:27.665 "compare": false, 00:19:27.665 "compare_and_write": false, 00:19:27.665 "abort": false, 00:19:27.665 "seek_hole": false, 00:19:27.665 "seek_data": false, 00:19:27.665 "copy": false, 00:19:27.665 "nvme_iov_md": false 00:19:27.665 }, 00:19:27.665 "driver_specific": { 00:19:27.665 "ftl": { 00:19:27.665 "base_bdev": "ab997be7-7519-4581-8242-c38c56020dbd", 00:19:27.665 "cache": "nvc0n1p0" 00:19:27.665 } 00:19:27.665 } 00:19:27.665 } 00:19:27.665 ] 00:19:27.665 00:42:04 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:27.665 00:42:04 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:27.665 00:42:04 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:27.924 00:42:04 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:27.924 00:42:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:28.182 00:42:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:28.182 { 00:19:28.182 "name": "ftl0", 00:19:28.182 "aliases": [ 00:19:28.182 "4eaa5371-a878-4f7e-9799-2a3c6c0cfb36" 00:19:28.182 ], 00:19:28.182 "product_name": "FTL disk", 00:19:28.182 "block_size": 4096, 00:19:28.182 "num_blocks": 23592960, 00:19:28.182 "uuid": "4eaa5371-a878-4f7e-9799-2a3c6c0cfb36", 00:19:28.182 "assigned_rate_limits": { 00:19:28.182 "rw_ios_per_sec": 0, 00:19:28.182 "rw_mbytes_per_sec": 0, 00:19:28.182 "r_mbytes_per_sec": 0, 00:19:28.182 "w_mbytes_per_sec": 0 00:19:28.182 }, 00:19:28.182 "claimed": false, 00:19:28.182 "zoned": false, 00:19:28.182 "supported_io_types": { 00:19:28.182 "read": true, 00:19:28.182 "write": true, 00:19:28.182 "unmap": true, 00:19:28.182 "flush": true, 00:19:28.182 "reset": false, 00:19:28.182 "nvme_admin": false, 00:19:28.182 "nvme_io": false, 00:19:28.182 "nvme_io_md": false, 00:19:28.182 "write_zeroes": true, 00:19:28.182 "zcopy": false, 00:19:28.182 "get_zone_info": false, 00:19:28.182 "zone_management": false, 00:19:28.182 "zone_append": false, 00:19:28.182 "compare": false, 00:19:28.182 "compare_and_write": false, 00:19:28.182 "abort": false, 00:19:28.182 "seek_hole": false, 00:19:28.182 "seek_data": false, 00:19:28.182 "copy": false, 00:19:28.182 "nvme_iov_md": false 00:19:28.182 }, 00:19:28.182 "driver_specific": { 00:19:28.182 "ftl": { 00:19:28.182 "base_bdev": "ab997be7-7519-4581-8242-c38c56020dbd", 
00:19:28.182 "cache": "nvc0n1p0" 00:19:28.182 } 00:19:28.182 } 00:19:28.182 } 00:19:28.182 ]' 00:19:28.182 00:42:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:28.182 00:42:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:28.182 00:42:04 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:28.442 [2024-11-27 00:42:05.025302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.025345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.442 [2024-11-27 00:42:05.025359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.442 [2024-11-27 00:42:05.025368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.025405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.442 [2024-11-27 00:42:05.025844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.025885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.442 [2024-11-27 00:42:05.025894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:19:28.442 [2024-11-27 00:42:05.025906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.026503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.026528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.442 [2024-11-27 00:42:05.026536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:19:28.442 [2024-11-27 00:42:05.026548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.030218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.030242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.442 [2024-11-27 00:42:05.030252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:19:28.442 [2024-11-27 00:42:05.030261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.037156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.037193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.442 [2024-11-27 00:42:05.037213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:19:28.442 [2024-11-27 00:42:05.037224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.039025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.039062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.442 [2024-11-27 00:42:05.039071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:19:28.442 [2024-11-27 00:42:05.039080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.043177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.043216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.442 [2024-11-27 00:42:05.043237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.054 ms 00:19:28.442 [2024-11-27 00:42:05.043248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.043442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.043468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.442 [2024-11-27 00:42:05.043477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:28.442 [2024-11-27 00:42:05.043486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.442 [2024-11-27 00:42:05.045348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.442 [2024-11-27 00:42:05.045385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.442 [2024-11-27 00:42:05.045393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:19:28.443 [2024-11-27 00:42:05.045404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.443 [2024-11-27 00:42:05.046843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.443 [2024-11-27 00:42:05.046889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.443 [2024-11-27 00:42:05.046898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:19:28.443 [2024-11-27 00:42:05.046906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.443 [2024-11-27 00:42:05.048020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.443 [2024-11-27 00:42:05.048055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.443 [2024-11-27 00:42:05.048064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:19:28.443 [2024-11-27 00:42:05.048073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.443 [2024-11-27 00:42:05.049082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.443 [2024-11-27 00:42:05.049117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.443 [2024-11-27 00:42:05.049126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:19:28.443 [2024-11-27 00:42:05.049134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.443 [2024-11-27 00:42:05.049181] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.443 [2024-11-27 00:42:05.049196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049257] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 
00:42:05.049479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
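(The band/stats dump in progress here is emitted by the bdev_ftl_unload from trim.sh@61; before tearing the device down, the script verified ftl0 with the same get_bdevs/jq pattern used on the base bdevs. A sketch, again using only commands already shown in this run:)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # wait for examine to finish, then read back the FTL bdev's geometry
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'   # -> 23592960
    # persist the bdev subsystem config, then unload ftl0 (produces this dump)
    $RPC save_subsystem_config -n bdev
    $RPC bdev_ftl_unload -b ftl0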
00:19:28.443 [2024-11-27 00:42:05.049686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.443 [2024-11-27 00:42:05.049869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.049992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.050001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.050008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.050018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.050025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.050036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.050045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.444 [2024-11-27 00:42:05.050061] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.444 [2024-11-27 00:42:05.050073] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:19:28.444 [2024-11-27 00:42:05.050092] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.444 [2024-11-27 00:42:05.050101] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.444 [2024-11-27 00:42:05.050110] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.444 [2024-11-27 00:42:05.050117] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.444 [2024-11-27 00:42:05.050125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.444 [2024-11-27 00:42:05.050132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.444 
[2024-11-27 00:42:05.050141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.444 [2024-11-27 00:42:05.050148] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.444 [2024-11-27 00:42:05.050157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.444 [2024-11-27 00:42:05.050164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.444 [2024-11-27 00:42:05.050173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.444 [2024-11-27 00:42:05.050181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:19:28.444 [2024-11-27 00:42:05.050191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.051717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.444 [2024-11-27 00:42:05.051742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.444 [2024-11-27 00:42:05.051751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:19:28.444 [2024-11-27 00:42:05.051770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.051879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.444 [2024-11-27 00:42:05.051889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.444 [2024-11-27 00:42:05.051898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:28.444 [2024-11-27 00:42:05.051906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.057218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.057250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.444 [2024-11-27 00:42:05.057260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.057269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.057362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.057380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.444 [2024-11-27 00:42:05.057388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.057399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.057456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.057471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.444 [2024-11-27 00:42:05.057479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.057488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.057534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.057556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.444 [2024-11-27 00:42:05.057564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.057572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.067023] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.067066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.444 [2024-11-27 00:42:05.067076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.067085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.074822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.074873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.444 [2024-11-27 00:42:05.074884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.074895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.074968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.074987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.444 [2024-11-27 00:42:05.074995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.075016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.075075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.075085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.444 [2024-11-27 00:42:05.075103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.075112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.075195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.075220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.444 [2024-11-27 00:42:05.075230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.075239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.075289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.075299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.444 [2024-11-27 00:42:05.075307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.075318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.075364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.075374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.444 [2024-11-27 00:42:05.075395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.075403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 [2024-11-27 00:42:05.075457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.444 [2024-11-27 00:42:05.075478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.444 [2024-11-27 00:42:05.075487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.444 [2024-11-27 00:42:05.075506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.444 
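The statistics dumped earlier in this shutdown trace are the snapshot of a device that has only been started and stopped: zero valid LBAs, zero user writes, and 960 total (metadata) writes, so WAF, the write amplification factor computed as total writes over user writes, is reported as inf. A small sketch for pulling those counters back out of a saved console log; the console.log file name is a placeholder, not an artifact this job produces:

# Extract the FTL statistics lines from a saved console log (bash/awk).
# "console.log" is a placeholder path, assumed here for illustration.
awk '/ftl_dev_dump_stats/ {
  sub(/.*\[FTL\]\[ftl0\][[:space:]]*/, "")   # keep text after the bdev tag
  if (length) print                          # skip the bare header line
}' console.log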
[2024-11-27 00:42:05.075693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.379 ms, result 0
00:19:28.444 true
00:19:28.444 00:42:05 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87709
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87709 ']'
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87709
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87709
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 killing process with pid 87709
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87709'
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87709
00:19:28.444 00:42:05 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87709
00:19:33.723 00:42:09 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:19:33.723 65536+0 records in
00:19:33.723 65536+0 records out
00:19:33.723 268435456 bytes (268 MB, 256 MiB) copied, 0.817042 s, 329 MB/s
00:19:33.723 00:42:10 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:33.723 [2024-11-27 00:42:10.443580] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization...
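The two trim.sh commands traced above form the test's data-load step: dd fabricates 256 MiB of random data (65536 blocks of 4 KiB, matching the 268435456 bytes reported), then spdk_dd streams that pattern onto the ftl0 bdev described by ftl.json. A minimal standalone sketch of the same step, assuming the repo path shown in the log and assuming dd's output is captured into the random_pattern file; the xtrace shows only the dd flags, not the redirection:

#!/usr/bin/env bash
# Sketch of the pattern-write step traced above (trim.sh@66 and trim.sh@69).
SPDK=/home/vagrant/spdk_repo/spdk   # path taken from the log

# 65536 x 4 KiB = 268435456 bytes (256 MiB); the of= target is an assumption.
dd if=/dev/urandom of="$SPDK/test/ftl/random_pattern" bs=4K count=65536

# Write the pattern to the ftl0 bdev; --if/--ob/--json are the exact flags
# from the spdk_dd invocation above, which requires ftl0 to be defined in
# the same ftl.json used by this run.
"$SPDK/build/bin/spdk_dd" \
  --if="$SPDK/test/ftl/random_pattern" \
  --ob=ftl0 \
  --json="$SPDK/test/ftl/config/ftl.json"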
00:19:33.723 [2024-11-27 00:42:10.443705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87869 ] 00:19:33.983 [2024-11-27 00:42:10.602133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.983 [2024-11-27 00:42:10.626812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.983 [2024-11-27 00:42:10.736308] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.983 [2024-11-27 00:42:10.736395] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.245 [2024-11-27 00:42:10.897651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.897724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.245 [2024-11-27 00:42:10.897741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:34.245 [2024-11-27 00:42:10.897750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.900318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.900370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.245 [2024-11-27 00:42:10.900382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.545 ms 00:19:34.245 [2024-11-27 00:42:10.900393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.900503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.245 [2024-11-27 00:42:10.900780] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.245 [2024-11-27 00:42:10.900813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.900827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.245 [2024-11-27 00:42:10.900838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:19:34.245 [2024-11-27 00:42:10.900847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.902768] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:34.245 [2024-11-27 00:42:10.906582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.906643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:34.245 [2024-11-27 00:42:10.906659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.817 ms 00:19:34.245 [2024-11-27 00:42:10.906667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.906747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.906758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:34.245 [2024-11-27 00:42:10.906768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:34.245 [2024-11-27 00:42:10.906776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.914896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:34.245 [2024-11-27 00:42:10.914941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.245 [2024-11-27 00:42:10.914953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.049 ms 00:19:34.245 [2024-11-27 00:42:10.914961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.915104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.915116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.245 [2024-11-27 00:42:10.915126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:34.245 [2024-11-27 00:42:10.915137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.915165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.915173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.245 [2024-11-27 00:42:10.915182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:34.245 [2024-11-27 00:42:10.915189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.915212] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:34.245 [2024-11-27 00:42:10.917275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.917309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.245 [2024-11-27 00:42:10.917320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.069 ms 00:19:34.245 [2024-11-27 00:42:10.917333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.245 [2024-11-27 00:42:10.917380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.245 [2024-11-27 00:42:10.917390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.245 [2024-11-27 00:42:10.917399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:34.245 [2024-11-27 00:42:10.917407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.246 [2024-11-27 00:42:10.917426] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:34.246 [2024-11-27 00:42:10.917452] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:34.246 [2024-11-27 00:42:10.917496] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:34.246 [2024-11-27 00:42:10.917516] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:34.246 [2024-11-27 00:42:10.917621] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:34.246 [2024-11-27 00:42:10.917631] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.246 [2024-11-27 00:42:10.917642] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:34.246 [2024-11-27 00:42:10.917654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:34.246 [2024-11-27 00:42:10.917664] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.246 [2024-11-27 00:42:10.917673] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:34.246 [2024-11-27 00:42:10.917681] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.246 [2024-11-27 00:42:10.917690] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:34.246 [2024-11-27 00:42:10.917703] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:34.246 [2024-11-27 00:42:10.917711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.246 [2024-11-27 00:42:10.917719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.246 [2024-11-27 00:42:10.917728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:19:34.246 [2024-11-27 00:42:10.917736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.246 [2024-11-27 00:42:10.917826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.246 [2024-11-27 00:42:10.917846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.246 [2024-11-27 00:42:10.917870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:34.246 [2024-11-27 00:42:10.917878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.246 [2024-11-27 00:42:10.917981] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.246 [2024-11-27 00:42:10.917999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.246 [2024-11-27 00:42:10.918009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.246 [2024-11-27 00:42:10.918036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.246 [2024-11-27 00:42:10.918066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.246 [2024-11-27 00:42:10.918082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.246 [2024-11-27 00:42:10.918090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:34.246 [2024-11-27 00:42:10.918099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.246 [2024-11-27 00:42:10.918106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.246 [2024-11-27 00:42:10.918114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:34.246 [2024-11-27 00:42:10.918123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:34.246 [2024-11-27 00:42:10.918139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918148] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.246 [2024-11-27 00:42:10.918165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.246 [2024-11-27 00:42:10.918196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.246 [2024-11-27 00:42:10.918246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:34.246 [2024-11-27 00:42:10.918271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.246 [2024-11-27 00:42:10.918295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.246 [2024-11-27 00:42:10.918312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:34.246 [2024-11-27 00:42:10.918320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:34.246 [2024-11-27 00:42:10.918328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.246 [2024-11-27 00:42:10.918337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:34.246 [2024-11-27 00:42:10.918345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:34.246 [2024-11-27 00:42:10.918356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:34.246 [2024-11-27 00:42:10.918372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:34.246 [2024-11-27 00:42:10.918381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.246 [2024-11-27 00:42:10.918399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.246 [2024-11-27 00:42:10.918407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.246 [2024-11-27 00:42:10.918430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:34.246 [2024-11-27 00:42:10.918438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.246 [2024-11-27 00:42:10.918446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.246 
[2024-11-27 00:42:10.918457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.246 [2024-11-27 00:42:10.918466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.246 [2024-11-27 00:42:10.918474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.246 [2024-11-27 00:42:10.918484] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.246 [2024-11-27 00:42:10.918496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.246 [2024-11-27 00:42:10.918510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:34.246 [2024-11-27 00:42:10.918518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:34.246 [2024-11-27 00:42:10.918525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:34.246 [2024-11-27 00:42:10.918533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:34.246 [2024-11-27 00:42:10.918541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:34.246 [2024-11-27 00:42:10.918548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:34.246 [2024-11-27 00:42:10.918555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:34.246 [2024-11-27 00:42:10.918562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:34.246 [2024-11-27 00:42:10.918569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:34.246 [2024-11-27 00:42:10.918578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:34.246 [2024-11-27 00:42:10.918585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:34.246 [2024-11-27 00:42:10.918592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:34.246 [2024-11-27 00:42:10.918599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:34.246 [2024-11-27 00:42:10.918608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:34.246 [2024-11-27 00:42:10.918615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.246 [2024-11-27 00:42:10.918626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.246 [2024-11-27 00:42:10.918640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.246 [2024-11-27 00:42:10.918647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.246 [2024-11-27 00:42:10.918655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.247 [2024-11-27 00:42:10.918664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.247 [2024-11-27 00:42:10.918672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.918679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.247 [2024-11-27 00:42:10.918686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:19:34.247 [2024-11-27 00:42:10.918694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.931970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.932018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.247 [2024-11-27 00:42:10.932030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.224 ms 00:19:34.247 [2024-11-27 00:42:10.932038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.932173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.932184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:34.247 [2024-11-27 00:42:10.932193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:34.247 [2024-11-27 00:42:10.932201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.963363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.963429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.247 [2024-11-27 00:42:10.963442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.139 ms 00:19:34.247 [2024-11-27 00:42:10.963450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.963552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.963565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.247 [2024-11-27 00:42:10.963575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.247 [2024-11-27 00:42:10.963583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.964161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.964201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.247 [2024-11-27 00:42:10.964214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:19:34.247 [2024-11-27 00:42:10.964223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.964393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.964408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.247 [2024-11-27 00:42:10.964418] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:34.247 [2024-11-27 00:42:10.964431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.972741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.972792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.247 [2024-11-27 00:42:10.972810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.283 ms 00:19:34.247 [2024-11-27 00:42:10.972818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.976737] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:34.247 [2024-11-27 00:42:10.976792] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:34.247 [2024-11-27 00:42:10.976805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.976814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:34.247 [2024-11-27 00:42:10.976823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.870 ms 00:19:34.247 [2024-11-27 00:42:10.976832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.993236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.993291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:34.247 [2024-11-27 00:42:10.993304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.305 ms 00:19:34.247 [2024-11-27 00:42:10.993312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.996611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.996665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:34.247 [2024-11-27 00:42:10.996676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.162 ms 00:19:34.247 [2024-11-27 00:42:10.996683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.999140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.999193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:34.247 [2024-11-27 00:42:10.999203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:19:34.247 [2024-11-27 00:42:10.999211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:10.999566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:10.999604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:34.247 [2024-11-27 00:42:10.999619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:34.247 [2024-11-27 00:42:10.999627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.247 [2024-11-27 00:42:11.026580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.247 [2024-11-27 00:42:11.026645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:34.247 [2024-11-27 00:42:11.026658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.927 ms 00:19:34.247 [2024-11-27 00:42:11.026667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-11-27 00:42:11.035051] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:34.509 [2024-11-27 00:42:11.056247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-11-27 00:42:11.056305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:34.509 [2024-11-27 00:42:11.056319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.476 ms 00:19:34.509 [2024-11-27 00:42:11.056329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-11-27 00:42:11.056431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-11-27 00:42:11.056442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:34.509 [2024-11-27 00:42:11.056453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:34.509 [2024-11-27 00:42:11.056465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-11-27 00:42:11.056523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-11-27 00:42:11.056533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:34.509 [2024-11-27 00:42:11.056542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:34.509 [2024-11-27 00:42:11.056550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-11-27 00:42:11.056580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-11-27 00:42:11.056589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:34.509 [2024-11-27 00:42:11.056598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:34.509 [2024-11-27 00:42:11.056606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-11-27 00:42:11.056646] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:34.509 [2024-11-27 00:42:11.056657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-11-27 00:42:11.056666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:34.509 [2024-11-27 00:42:11.056675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:34.509 [2024-11-27 00:42:11.056684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-11-27 00:42:11.062767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-11-27 00:42:11.062827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:34.509 [2024-11-27 00:42:11.062838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.057 ms 00:19:34.509 [2024-11-27 00:42:11.062847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 [2024-11-27 00:42:11.062972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.509 [2024-11-27 00:42:11.062984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:34.509 [2024-11-27 00:42:11.062994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:34.509 [2024-11-27 00:42:11.063002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.509 
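The startup trace above prints the layout twice, once per region in MiB and once in the superblock metadata dump in raw blocks, and the two views agree if the FTL block size is 4096 bytes. For example, 'Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00' lines up with the 90.00 MiB l2p region at offset 0.12 MiB; reading type 0x2 as l2p is an inference from the matching sizes, and the 4 KiB block size is an assumption consistent with the numbers:

# bash arithmetic check of the superblock layout entry against the l2p region
blk_offs=0x20 blk_sz=0x5a00 block=4096
echo "offset: $(( blk_offs * block / 1024 )) KiB"       # 128 KiB = 0.12 MiB
echo "size:   $(( blk_sz * block / 1024 / 1024 )) MiB"  # 90 MiB = 90.00 MiB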
[2024-11-27 00:42:11.063984] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.509 [2024-11-27 00:42:11.065286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 166.017 ms, result 0 00:19:34.509 [2024-11-27 00:42:11.066522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.509 [2024-11-27 00:42:11.073935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.454  [2024-11-27T00:42:13.183Z] Copying: 16/256 [MB] (16 MBps) [2024-11-27T00:42:14.124Z] Copying: 42/256 [MB] (25 MBps) [2024-11-27T00:42:15.508Z] Copying: 65/256 [MB] (23 MBps) [2024-11-27T00:42:16.080Z] Copying: 79/256 [MB] (13 MBps) [2024-11-27T00:42:17.470Z] Copying: 98/256 [MB] (19 MBps) [2024-11-27T00:42:18.414Z] Copying: 122/256 [MB] (23 MBps) [2024-11-27T00:42:19.357Z] Copying: 145/256 [MB] (23 MBps) [2024-11-27T00:42:20.302Z] Copying: 160/256 [MB] (14 MBps) [2024-11-27T00:42:21.244Z] Copying: 176/256 [MB] (16 MBps) [2024-11-27T00:42:22.189Z] Copying: 198/256 [MB] (21 MBps) [2024-11-27T00:42:23.134Z] Copying: 226/256 [MB] (27 MBps) [2024-11-27T00:42:24.078Z] Copying: 246/256 [MB] (20 MBps) [2024-11-27T00:42:24.078Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-27 00:42:23.869044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.291 [2024-11-27 00:42:23.870911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.870973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.291 [2024-11-27 00:42:23.870988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:47.291 [2024-11-27 00:42:23.870997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.871021] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:47.291 [2024-11-27 00:42:23.871695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.871742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.291 [2024-11-27 00:42:23.871755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:19:47.291 [2024-11-27 00:42:23.871764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.874134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.874184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.291 [2024-11-27 00:42:23.874196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:19:47.291 [2024-11-27 00:42:23.874212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.881445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.881492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.291 [2024-11-27 00:42:23.881503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.200 ms 00:19:47.291 [2024-11-27 00:42:23.881520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.888503] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.888546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.291 [2024-11-27 00:42:23.888569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:19:47.291 [2024-11-27 00:42:23.888581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.890842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.890910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.291 [2024-11-27 00:42:23.890920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms 00:19:47.291 [2024-11-27 00:42:23.890929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.896155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.896218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.291 [2024-11-27 00:42:23.896229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:19:47.291 [2024-11-27 00:42:23.896238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.896373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.896385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.291 [2024-11-27 00:42:23.896393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:47.291 [2024-11-27 00:42:23.896419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.899357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.899409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.291 [2024-11-27 00:42:23.899419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:19:47.291 [2024-11-27 00:42:23.899427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.901620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.901669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.291 [2024-11-27 00:42:23.901678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:19:47.291 [2024-11-27 00:42:23.901686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.903456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.903506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.291 [2024-11-27 00:42:23.903517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:19:47.291 [2024-11-27 00:42:23.903526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.291 [2024-11-27 00:42:23.905267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.291 [2024-11-27 00:42:23.905316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.291 [2024-11-27 00:42:23.905326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:19:47.291 [2024-11-27 00:42:23.905333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
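At this point the copy has completed (256/256 MB, average 20 MBps) and the device is shutting down cleanly, persisting the L2P, NV cache, band, and trim metadata in turn. The excerpt shows no read-back pass; if one were wanted, spdk_dd can also run in the reverse direction. A hypothetical verification sketch reusing the run's JSON config; the /tmp path, the --ib/--of direction swap, and the cmp step are assumptions, not steps from this run:

# Hypothetical read-back: copy 256 MiB out of ftl0 into a file and
# byte-compare it with the pattern file. Not part of the traced test.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_dd" \
  --ib=ftl0 \
  --of=/tmp/ftl0_readback \
  --bs=4096 --count=65536 \
  --json="$SPDK/test/ftl/config/ftl.json"
cmp /tmp/ftl0_readback "$SPDK/test/ftl/random_pattern"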
00:19:47.291 [2024-11-27 00:42:23.905376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.291 [2024-11-27 00:42:23.905391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.291 [2024-11-27 00:42:23.905562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 
[2024-11-27 00:42:23.905585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:19:47.292 [2024-11-27 00:42:23.905777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.905997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.292 [2024-11-27 00:42:23.906239] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.292 [2024-11-27 00:42:23.906250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:19:47.292 [2024-11-27 00:42:23.906258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.292 [2024-11-27 00:42:23.906267] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.292 [2024-11-27 00:42:23.906274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.292 [2024-11-27 00:42:23.906282] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.292 [2024-11-27 00:42:23.906296] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.292 [2024-11-27 00:42:23.906304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.292 [2024-11-27 00:42:23.906318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.292 [2024-11-27 00:42:23.906325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.292 [2024-11-27 00:42:23.906332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.292 [2024-11-27 00:42:23.906339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.292 [2024-11-27 00:42:23.906349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.292 [2024-11-27 00:42:23.906359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:19:47.292 [2024-11-27 00:42:23.906372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.908689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.293 [2024-11-27 00:42:23.908729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.293 [2024-11-27 00:42:23.908740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:19:47.293 [2024-11-27 00:42:23.908749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.908904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.293 [2024-11-27 00:42:23.908916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.293 [2024-11-27 00:42:23.908926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:47.293 [2024-11-27 00:42:23.908934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.916927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.916979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.293 [2024-11-27 00:42:23.916990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.917005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.917069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.917079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.293 [2024-11-27 00:42:23.917087] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.917105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.917158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.917170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.293 [2024-11-27 00:42:23.917178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.917186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.917209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.917217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.293 [2024-11-27 00:42:23.917229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.917236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.930315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.930369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.293 [2024-11-27 00:42:23.930381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.930390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.940329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.940379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.293 [2024-11-27 00:42:23.940399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.940407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.940458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.940468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.293 [2024-11-27 00:42:23.940477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.940486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.940516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.940528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.293 [2024-11-27 00:42:23.940536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.940544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.940618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.940631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.293 [2024-11-27 00:42:23.940639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.940647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.940678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.940688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:19:47.293 [2024-11-27 00:42:23.940699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.940707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.940751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.940761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.293 [2024-11-27 00:42:23.940769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.940777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.940821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.293 [2024-11-27 00:42:23.940835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.293 [2024-11-27 00:42:23.940843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.293 [2024-11-27 00:42:23.940869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.293 [2024-11-27 00:42:23.941018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.081 ms, result 0 00:19:47.555 00:19:47.555 00:19:47.555 00:42:24 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=88021 00:19:47.555 00:42:24 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 88021 00:19:47.555 00:42:24 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:47.555 00:42:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88021 ']' 00:19:47.555 00:42:24 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.555 00:42:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.555 00:42:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.555 00:42:24 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.555 00:42:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:47.555 [2024-11-27 00:42:24.261450] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
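For orientation, the ftl_trim xtrace traced through this run (trim.sh@71-@81) corresponds to roughly the following shell flow. This is a condensed, hypothetical sketch reconstructed from the trace lines, not the script itself: the waitforlisten and killprocess helpers are autotest_common.sh functions approximated here with plain shell, and the load_config input file name is assumed.

    # Sketch of the ftl/trim.sh steps traced in this run (approximation, not the script itself).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!                                     # trim.sh@72: remember the target's pid

    # trim.sh@73 waitforlisten: poll until the RPC socket exists (helper approximated).
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # trim.sh@75: replay the saved bdev/FTL configuration (input JSON name assumed).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl.json

    # trim.sh@78-79: trim 1024 blocks at the start and at the top of the L2P range.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

    # trim.sh@81 killprocess: shut the target down (helper approximated).
    kill "$svcpid" && wait "$svcpid"

Each rpc.py call below logs its own 'FTL trim' management process and prints 'true' on success, which is what the corresponding entries in this run show.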
00:19:47.555 [2024-11-27 00:42:24.261587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88021 ] 00:19:47.816 [2024-11-27 00:42:24.417094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.816 [2024-11-27 00:42:24.446306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.387 00:42:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.387 00:42:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:48.387 00:42:25 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:48.648 [2024-11-27 00:42:25.313742] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.648 [2024-11-27 00:42:25.313797] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.911 [2024-11-27 00:42:25.476432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.476472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:48.911 [2024-11-27 00:42:25.476485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.911 [2024-11-27 00:42:25.476493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.478262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.478292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.911 [2024-11-27 00:42:25.478299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:19:48.911 [2024-11-27 00:42:25.478306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.478360] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:48.911 [2024-11-27 00:42:25.478536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:48.911 [2024-11-27 00:42:25.478557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.478567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.911 [2024-11-27 00:42:25.478573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:19:48.911 [2024-11-27 00:42:25.478580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.479690] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:48.911 [2024-11-27 00:42:25.481721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.481750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:48.911 [2024-11-27 00:42:25.481759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:19:48.911 [2024-11-27 00:42:25.481766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.481812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.481819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:48.911 [2024-11-27 00:42:25.481828] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:48.911 [2024-11-27 00:42:25.481834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.486278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.486306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.911 [2024-11-27 00:42:25.486315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.390 ms 00:19:48.911 [2024-11-27 00:42:25.486321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.486395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.486402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.911 [2024-11-27 00:42:25.486411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:48.911 [2024-11-27 00:42:25.486417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.486439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.486448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:48.911 [2024-11-27 00:42:25.486458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:48.911 [2024-11-27 00:42:25.486464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.486482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:48.911 [2024-11-27 00:42:25.487637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.487665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.911 [2024-11-27 00:42:25.487673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:19:48.911 [2024-11-27 00:42:25.487680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.911 [2024-11-27 00:42:25.487706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.911 [2024-11-27 00:42:25.487714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:48.912 [2024-11-27 00:42:25.487723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:48.912 [2024-11-27 00:42:25.487729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.912 [2024-11-27 00:42:25.487744] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:48.912 [2024-11-27 00:42:25.487758] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:48.912 [2024-11-27 00:42:25.487787] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:48.912 [2024-11-27 00:42:25.487805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:48.912 [2024-11-27 00:42:25.487899] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:48.912 [2024-11-27 00:42:25.487911] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:48.912 [2024-11-27 00:42:25.487920] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:48.912 [2024-11-27 00:42:25.487928] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:48.912 [2024-11-27 00:42:25.487935] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:48.912 [2024-11-27 00:42:25.487944] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:48.912 [2024-11-27 00:42:25.487949] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:48.912 [2024-11-27 00:42:25.487960] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:48.912 [2024-11-27 00:42:25.487966] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:48.912 [2024-11-27 00:42:25.487973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.912 [2024-11-27 00:42:25.487978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:48.912 [2024-11-27 00:42:25.487985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:48.912 [2024-11-27 00:42:25.487991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.912 [2024-11-27 00:42:25.488058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.912 [2024-11-27 00:42:25.488070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:48.912 [2024-11-27 00:42:25.488078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:48.912 [2024-11-27 00:42:25.488083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.912 [2024-11-27 00:42:25.488162] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:48.912 [2024-11-27 00:42:25.488174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:48.912 [2024-11-27 00:42:25.488182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:48.912 [2024-11-27 00:42:25.488208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:48.912 [2024-11-27 00:42:25.488231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.912 [2024-11-27 00:42:25.488243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:48.912 [2024-11-27 00:42:25.488248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:48.912 [2024-11-27 00:42:25.488254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.912 [2024-11-27 00:42:25.488259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:48.912 [2024-11-27 00:42:25.488266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:48.912 [2024-11-27 00:42:25.488271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.912 
[2024-11-27 00:42:25.488277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:48.912 [2024-11-27 00:42:25.488282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:48.912 [2024-11-27 00:42:25.488304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:48.912 [2024-11-27 00:42:25.488323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:48.912 [2024-11-27 00:42:25.488344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:48.912 [2024-11-27 00:42:25.488363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:48.912 [2024-11-27 00:42:25.488384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.912 [2024-11-27 00:42:25.488397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:48.912 [2024-11-27 00:42:25.488403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:48.912 [2024-11-27 00:42:25.488411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.912 [2024-11-27 00:42:25.488417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:48.912 [2024-11-27 00:42:25.488424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:48.912 [2024-11-27 00:42:25.488430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:48.912 [2024-11-27 00:42:25.488443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:48.912 [2024-11-27 00:42:25.488451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488457] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:48.912 [2024-11-27 00:42:25.488465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:48.912 [2024-11-27 00:42:25.488471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.912 [2024-11-27 00:42:25.488487] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:48.912 [2024-11-27 00:42:25.488495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:48.912 [2024-11-27 00:42:25.488500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:48.912 [2024-11-27 00:42:25.488508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:48.912 [2024-11-27 00:42:25.488513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:48.912 [2024-11-27 00:42:25.488521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:48.912 [2024-11-27 00:42:25.488529] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:48.912 [2024-11-27 00:42:25.488539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.912 [2024-11-27 00:42:25.488548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:48.912 [2024-11-27 00:42:25.488555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:48.912 [2024-11-27 00:42:25.488562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:48.912 [2024-11-27 00:42:25.488570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:48.912 [2024-11-27 00:42:25.488576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:48.912 [2024-11-27 00:42:25.488583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:48.912 [2024-11-27 00:42:25.488589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:48.912 [2024-11-27 00:42:25.488597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:48.912 [2024-11-27 00:42:25.488603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:48.912 [2024-11-27 00:42:25.488611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:48.912 [2024-11-27 00:42:25.488617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:48.912 [2024-11-27 00:42:25.488628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:48.912 [2024-11-27 00:42:25.488635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:48.912 [2024-11-27 00:42:25.488644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:48.912 [2024-11-27 00:42:25.488650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:48.912 [2024-11-27 
00:42:25.488658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.912 [2024-11-27 00:42:25.488665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:48.912 [2024-11-27 00:42:25.488674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:48.912 [2024-11-27 00:42:25.488680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:48.912 [2024-11-27 00:42:25.488687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:48.913 [2024-11-27 00:42:25.488692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.488699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:48.913 [2024-11-27 00:42:25.488709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:19:48.913 [2024-11-27 00:42:25.488717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.496522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.496550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.913 [2024-11-27 00:42:25.496559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.765 ms 00:19:48.913 [2024-11-27 00:42:25.496566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.496656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.496666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.913 [2024-11-27 00:42:25.496672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:48.913 [2024-11-27 00:42:25.496679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.504020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.504050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.913 [2024-11-27 00:42:25.504058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.326 ms 00:19:48.913 [2024-11-27 00:42:25.504066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.504099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.504107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.913 [2024-11-27 00:42:25.504113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:48.913 [2024-11-27 00:42:25.504120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.504403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.504428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.913 [2024-11-27 00:42:25.504438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:48.913 [2024-11-27 00:42:25.504445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:48.913 [2024-11-27 00:42:25.504550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.504564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.913 [2024-11-27 00:42:25.504571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:48.913 [2024-11-27 00:42:25.504579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.509332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.509361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.913 [2024-11-27 00:42:25.509368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:19:48.913 [2024-11-27 00:42:25.509376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.519885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:48.913 [2024-11-27 00:42:25.519939] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:48.913 [2024-11-27 00:42:25.519957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.519970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:48.913 [2024-11-27 00:42:25.519983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.503 ms 00:19:48.913 [2024-11-27 00:42:25.519995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.535622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.535655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:48.913 [2024-11-27 00:42:25.535664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.549 ms 00:19:48.913 [2024-11-27 00:42:25.535674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.537302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.537332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:48.913 [2024-11-27 00:42:25.537339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:19:48.913 [2024-11-27 00:42:25.537347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.538710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.538741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:48.913 [2024-11-27 00:42:25.538748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:19:48.913 [2024-11-27 00:42:25.538755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.539005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.539022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.913 [2024-11-27 00:42:25.539029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:19:48.913 [2024-11-27 00:42:25.539036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.554501] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.554536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:48.913 [2024-11-27 00:42:25.554546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.449 ms 00:19:48.913 [2024-11-27 00:42:25.554556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.560438] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:48.913 [2024-11-27 00:42:25.571819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.571848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:48.913 [2024-11-27 00:42:25.571866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.215 ms 00:19:48.913 [2024-11-27 00:42:25.571872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.571943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.571951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:48.913 [2024-11-27 00:42:25.571961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:48.913 [2024-11-27 00:42:25.571971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.572009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.572016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:48.913 [2024-11-27 00:42:25.572023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:48.913 [2024-11-27 00:42:25.572029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.572048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.572057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:48.913 [2024-11-27 00:42:25.572066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:48.913 [2024-11-27 00:42:25.572072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.572096] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:48.913 [2024-11-27 00:42:25.572103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.572109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:48.913 [2024-11-27 00:42:25.572115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:48.913 [2024-11-27 00:42:25.572122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.575790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.575822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:48.913 [2024-11-27 00:42:25.575831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:19:48.913 [2024-11-27 00:42:25.575839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.575903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.913 [2024-11-27 00:42:25.575912] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:48.913 [2024-11-27 00:42:25.575919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:48.913 [2024-11-27 00:42:25.575926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.913 [2024-11-27 00:42:25.576629] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.913 [2024-11-27 00:42:25.577456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.997 ms, result 0 00:19:48.913 [2024-11-27 00:42:25.578509] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:48.913 Some configs were skipped because the RPC state that can call them passed over. 00:19:48.913 00:42:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:49.175 [2024-11-27 00:42:25.793747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.175 [2024-11-27 00:42:25.793777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:49.175 [2024-11-27 00:42:25.793787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:19:49.175 [2024-11-27 00:42:25.793793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.175 [2024-11-27 00:42:25.793818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.374 ms, result 0 00:19:49.175 true 00:19:49.175 00:42:25 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:49.438 [2024-11-27 00:42:25.993969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:25.994002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:49.438 [2024-11-27 00:42:25.994010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:19:49.438 [2024-11-27 00:42:25.994017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:25.994042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.412 ms, result 0 00:19:49.438 true 00:19:49.438 00:42:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 88021 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88021 ']' 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88021 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88021 00:19:49.438 killing process with pid 88021 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88021' 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88021 00:19:49.438 00:42:26 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88021 00:19:49.438 [2024-11-27 00:42:26.121165] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.121211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.438 [2024-11-27 00:42:26.121225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:49.438 [2024-11-27 00:42:26.121234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.121258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:49.438 [2024-11-27 00:42:26.121656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.121706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.438 [2024-11-27 00:42:26.121714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:19:49.438 [2024-11-27 00:42:26.121722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.121955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.121973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.438 [2024-11-27 00:42:26.121981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:19:49.438 [2024-11-27 00:42:26.121989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.125499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.125539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.438 [2024-11-27 00:42:26.125550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.496 ms 00:19:49.438 [2024-11-27 00:42:26.125557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.130706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.130734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.438 [2024-11-27 00:42:26.130742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.121 ms 00:19:49.438 [2024-11-27 00:42:26.130751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.132918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.132948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.438 [2024-11-27 00:42:26.132954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:19:49.438 [2024-11-27 00:42:26.132961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.137027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.137060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:49.438 [2024-11-27 00:42:26.137067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.039 ms 00:19:49.438 [2024-11-27 00:42:26.137075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.137168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.137176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:49.438 [2024-11-27 00:42:26.137182] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:49.438 [2024-11-27 00:42:26.137189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.139976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.140021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:49.438 [2024-11-27 00:42:26.140030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.772 ms 00:19:49.438 [2024-11-27 00:42:26.140039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.142013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.142044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:49.438 [2024-11-27 00:42:26.142051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:19:49.438 [2024-11-27 00:42:26.142058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.143813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.143846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:49.438 [2024-11-27 00:42:26.143861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:19:49.438 [2024-11-27 00:42:26.143868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.145591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.438 [2024-11-27 00:42:26.145621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:49.438 [2024-11-27 00:42:26.145627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:19:49.438 [2024-11-27 00:42:26.145634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.438 [2024-11-27 00:42:26.145659] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:49.438 [2024-11-27 00:42:26.145673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:49.438 [2024-11-27 00:42:26.145728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145741] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 
00:42:26.145925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.145995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:19:49.439 [2024-11-27 00:42:26.146088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:49.439 [2024-11-27 00:42:26.146332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:49.440 [2024-11-27 00:42:26.146339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:49.440 [2024-11-27 00:42:26.146345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:49.440 [2024-11-27 00:42:26.146353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:49.440 [2024-11-27 00:42:26.146360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:49.440 [2024-11-27 00:42:26.146373] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:49.440 [2024-11-27 00:42:26.146381] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:19:49.440 [2024-11-27 00:42:26.146388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:49.440 [2024-11-27 00:42:26.146394] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:49.440 [2024-11-27 00:42:26.146400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:49.440 [2024-11-27 00:42:26.146408] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:49.440 [2024-11-27 00:42:26.146415] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:49.440 [2024-11-27 00:42:26.146424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:49.440 [2024-11-27 00:42:26.146431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:49.440 [2024-11-27 00:42:26.146436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:49.440 [2024-11-27 00:42:26.146442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:49.440 [2024-11-27 00:42:26.146448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.440 
[2024-11-27 00:42:26.146456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:49.440 [2024-11-27 00:42:26.146462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:19:49.440 [2024-11-27 00:42:26.146470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.147700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.440 [2024-11-27 00:42:26.147726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:49.440 [2024-11-27 00:42:26.147734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.212 ms 00:19:49.440 [2024-11-27 00:42:26.147744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.147824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.440 [2024-11-27 00:42:26.147833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:49.440 [2024-11-27 00:42:26.147840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:49.440 [2024-11-27 00:42:26.147848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.152378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.152408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.440 [2024-11-27 00:42:26.152419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.152426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.152486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.152498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.440 [2024-11-27 00:42:26.152504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.152514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.152542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.152551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.440 [2024-11-27 00:42:26.152557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.152564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.152577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.152585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.440 [2024-11-27 00:42:26.152592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.152599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.160736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.160773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.440 [2024-11-27 00:42:26.160781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.160789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.166948] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.166983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.440 [2024-11-27 00:42:26.166990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.167002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.167046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.167055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.440 [2024-11-27 00:42:26.167061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.167068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.167091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.167099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.440 [2024-11-27 00:42:26.167105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.167112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.167162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.167171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.440 [2024-11-27 00:42:26.167177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.167184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.167208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.167216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:49.440 [2024-11-27 00:42:26.167223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.167232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.167263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.167272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.440 [2024-11-27 00:42:26.167278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.167285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.167320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.440 [2024-11-27 00:42:26.167336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.440 [2024-11-27 00:42:26.167343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.440 [2024-11-27 00:42:26.167350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.440 [2024-11-27 00:42:26.167449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.270 ms, result 0 00:19:49.702 00:42:26 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:49.702 00:42:26 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:49.702 [2024-11-27 00:42:26.390422] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:19:49.702 [2024-11-27 00:42:26.390548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88057 ] 00:19:49.963 [2024-11-27 00:42:26.543941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.963 [2024-11-27 00:42:26.564958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.963 [2024-11-27 00:42:26.649935] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.963 [2024-11-27 00:42:26.649992] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.227 [2024-11-27 00:42:26.796474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.227 [2024-11-27 00:42:26.796511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:50.227 [2024-11-27 00:42:26.796521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:50.227 [2024-11-27 00:42:26.796527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.227 [2024-11-27 00:42:26.798265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.798292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.228 [2024-11-27 00:42:26.798299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:19:50.228 [2024-11-27 00:42:26.798305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.798360] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:50.228 [2024-11-27 00:42:26.798571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:50.228 [2024-11-27 00:42:26.798591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.798597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.228 [2024-11-27 00:42:26.798604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:19:50.228 [2024-11-27 00:42:26.798609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.799571] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:50.228 [2024-11-27 00:42:26.801641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.801669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:50.228 [2024-11-27 00:42:26.801678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.070 ms 00:19:50.228 [2024-11-27 00:42:26.801687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.801732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.801739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:50.228 [2024-11-27 00:42:26.801746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.013 ms 00:19:50.228 [2024-11-27 00:42:26.801751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.806417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.806444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.228 [2024-11-27 00:42:26.806452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.636 ms 00:19:50.228 [2024-11-27 00:42:26.806458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.806544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.806556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.228 [2024-11-27 00:42:26.806563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:50.228 [2024-11-27 00:42:26.806571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.806588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.806599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:50.228 [2024-11-27 00:42:26.806605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:50.228 [2024-11-27 00:42:26.806611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.806627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:50.228 [2024-11-27 00:42:26.807784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.807808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.228 [2024-11-27 00:42:26.807816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:19:50.228 [2024-11-27 00:42:26.807824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.807864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.807871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:50.228 [2024-11-27 00:42:26.807879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:50.228 [2024-11-27 00:42:26.807885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.807898] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:50.228 [2024-11-27 00:42:26.807911] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:50.228 [2024-11-27 00:42:26.807940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:50.228 [2024-11-27 00:42:26.807954] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:50.228 [2024-11-27 00:42:26.808032] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:50.228 [2024-11-27 00:42:26.808047] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:50.228 [2024-11-27 00:42:26.808055] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:50.228 [2024-11-27 00:42:26.808063] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808069] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808078] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:50.228 [2024-11-27 00:42:26.808084] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:50.228 [2024-11-27 00:42:26.808089] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:50.228 [2024-11-27 00:42:26.808097] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:50.228 [2024-11-27 00:42:26.808104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.808110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:50.228 [2024-11-27 00:42:26.808118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:19:50.228 [2024-11-27 00:42:26.808124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.808189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.228 [2024-11-27 00:42:26.808196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:50.228 [2024-11-27 00:42:26.808202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:50.228 [2024-11-27 00:42:26.808208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.228 [2024-11-27 00:42:26.808282] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:50.228 [2024-11-27 00:42:26.808298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:50.228 [2024-11-27 00:42:26.808305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:50.228 [2024-11-27 00:42:26.808323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:50.228 [2024-11-27 00:42:26.808342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.228 [2024-11-27 00:42:26.808352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:50.228 [2024-11-27 00:42:26.808358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:50.228 [2024-11-27 00:42:26.808363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.228 [2024-11-27 00:42:26.808368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:50.228 [2024-11-27 00:42:26.808374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:50.228 [2024-11-27 00:42:26.808379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808385] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:50.228 [2024-11-27 00:42:26.808391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:50.228 [2024-11-27 00:42:26.808407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:50.228 [2024-11-27 00:42:26.808422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:50.228 [2024-11-27 00:42:26.808441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:50.228 [2024-11-27 00:42:26.808459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.228 [2024-11-27 00:42:26.808470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:50.228 [2024-11-27 00:42:26.808476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:50.228 [2024-11-27 00:42:26.808481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.228 [2024-11-27 00:42:26.808487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:50.228 [2024-11-27 00:42:26.808492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:50.229 [2024-11-27 00:42:26.808498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.229 [2024-11-27 00:42:26.808504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:50.229 [2024-11-27 00:42:26.808510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:50.229 [2024-11-27 00:42:26.808516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.229 [2024-11-27 00:42:26.808526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:50.229 [2024-11-27 00:42:26.808533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:50.229 [2024-11-27 00:42:26.808539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.229 [2024-11-27 00:42:26.808544] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:50.229 [2024-11-27 00:42:26.808551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:50.229 [2024-11-27 00:42:26.808557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.229 [2024-11-27 00:42:26.808564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.229 [2024-11-27 00:42:26.808571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:50.229 
[2024-11-27 00:42:26.808578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:50.229 [2024-11-27 00:42:26.808584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:50.229 [2024-11-27 00:42:26.808590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:50.229 [2024-11-27 00:42:26.808595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:50.229 [2024-11-27 00:42:26.808601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:50.229 [2024-11-27 00:42:26.808608] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:50.229 [2024-11-27 00:42:26.808616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.229 [2024-11-27 00:42:26.808624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:50.229 [2024-11-27 00:42:26.808631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:50.229 [2024-11-27 00:42:26.808637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:50.229 [2024-11-27 00:42:26.808643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:50.229 [2024-11-27 00:42:26.808648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:50.229 [2024-11-27 00:42:26.808653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:50.229 [2024-11-27 00:42:26.808658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:50.229 [2024-11-27 00:42:26.808663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:50.229 [2024-11-27 00:42:26.808668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:50.229 [2024-11-27 00:42:26.808673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:50.229 [2024-11-27 00:42:26.808678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:50.229 [2024-11-27 00:42:26.808684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:50.229 [2024-11-27 00:42:26.808689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:50.229 [2024-11-27 00:42:26.808694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:50.229 [2024-11-27 00:42:26.808699] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:50.229 [2024-11-27 00:42:26.808706] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.229 [2024-11-27 00:42:26.808713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:50.229 [2024-11-27 00:42:26.808720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:50.229 [2024-11-27 00:42:26.808725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:50.229 [2024-11-27 00:42:26.808730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:50.229 [2024-11-27 00:42:26.808736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.808741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:50.229 [2024-11-27 00:42:26.808747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:19:50.229 [2024-11-27 00:42:26.808754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.816652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.816678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.229 [2024-11-27 00:42:26.816689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.861 ms 00:19:50.229 [2024-11-27 00:42:26.816695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.816785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.816796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.229 [2024-11-27 00:42:26.816803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:50.229 [2024-11-27 00:42:26.816809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.842975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.843017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.229 [2024-11-27 00:42:26.843030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.150 ms 00:19:50.229 [2024-11-27 00:42:26.843039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.843126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.843142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.229 [2024-11-27 00:42:26.843151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:50.229 [2024-11-27 00:42:26.843163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.843490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.843518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.229 [2024-11-27 00:42:26.843528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:50.229 [2024-11-27 00:42:26.843537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 
00:42:26.843679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.843699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.229 [2024-11-27 00:42:26.843709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:50.229 [2024-11-27 00:42:26.843718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.849203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.849241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.229 [2024-11-27 00:42:26.849251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.460 ms 00:19:50.229 [2024-11-27 00:42:26.849263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.851740] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:50.229 [2024-11-27 00:42:26.851778] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.229 [2024-11-27 00:42:26.851796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.851805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.229 [2024-11-27 00:42:26.851814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.447 ms 00:19:50.229 [2024-11-27 00:42:26.851821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.863459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.863488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.229 [2024-11-27 00:42:26.863497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.529 ms 00:19:50.229 [2024-11-27 00:42:26.863503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.865156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.865182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.229 [2024-11-27 00:42:26.865189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:19:50.229 [2024-11-27 00:42:26.865195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.866476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.866502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.229 [2024-11-27 00:42:26.866509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:19:50.229 [2024-11-27 00:42:26.866519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.229 [2024-11-27 00:42:26.866762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.229 [2024-11-27 00:42:26.866777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.230 [2024-11-27 00:42:26.866783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:19:50.230 [2024-11-27 00:42:26.866789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.881430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.881469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.230 [2024-11-27 00:42:26.881477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.625 ms 00:19:50.230 [2024-11-27 00:42:26.881485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.887147] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.230 [2024-11-27 00:42:26.899101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.899130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.230 [2024-11-27 00:42:26.899143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.565 ms 00:19:50.230 [2024-11-27 00:42:26.899149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.899223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.899231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.230 [2024-11-27 00:42:26.899242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:50.230 [2024-11-27 00:42:26.899248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.899285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.899291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.230 [2024-11-27 00:42:26.899298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:50.230 [2024-11-27 00:42:26.899304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.899325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.899332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.230 [2024-11-27 00:42:26.899337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.230 [2024-11-27 00:42:26.899346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.899369] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.230 [2024-11-27 00:42:26.899379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.899385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.230 [2024-11-27 00:42:26.899391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:50.230 [2024-11-27 00:42:26.899396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.902488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.902516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.230 [2024-11-27 00:42:26.902524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:19:50.230 [2024-11-27 00:42:26.902534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.902592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.230 [2024-11-27 00:42:26.902599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:50.230 [2024-11-27 00:42:26.902605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:50.230 [2024-11-27 00:42:26.902611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.230 [2024-11-27 00:42:26.903325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.230 [2024-11-27 00:42:26.904129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.645 ms, result 0 00:19:50.230 [2024-11-27 00:42:26.904910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.230 [2024-11-27 00:42:26.912692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:51.271  [2024-11-27T00:42:29.003Z] Copying: 21/256 [MB] (21 MBps) [2024-11-27T00:42:29.948Z] Copying: 40/256 [MB] (19 MBps) [2024-11-27T00:42:31.335Z] Copying: 63/256 [MB] (22 MBps) [2024-11-27T00:42:32.277Z] Copying: 79/256 [MB] (16 MBps) [2024-11-27T00:42:33.221Z] Copying: 96/256 [MB] (17 MBps) [2024-11-27T00:42:34.165Z] Copying: 112/256 [MB] (15 MBps) [2024-11-27T00:42:35.108Z] Copying: 126/256 [MB] (13 MBps) [2024-11-27T00:42:36.051Z] Copying: 148/256 [MB] (22 MBps) [2024-11-27T00:42:36.996Z] Copying: 166/256 [MB] (17 MBps) [2024-11-27T00:42:37.938Z] Copying: 186/256 [MB] (19 MBps) [2024-11-27T00:42:39.329Z] Copying: 202/256 [MB] (16 MBps) [2024-11-27T00:42:40.276Z] Copying: 218/256 [MB] (15 MBps) [2024-11-27T00:42:41.222Z] Copying: 230/256 [MB] (12 MBps) [2024-11-27T00:42:41.797Z] Copying: 246/256 [MB] (15 MBps) [2024-11-27T00:42:41.797Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-27 00:42:41.515982] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.010 [2024-11-27 00:42:41.517812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.517889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.010 [2024-11-27 00:42:41.517904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:05.010 [2024-11-27 00:42:41.517913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.517936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:05.010 [2024-11-27 00:42:41.518642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.518687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.010 [2024-11-27 00:42:41.518699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:20:05.010 [2024-11-27 00:42:41.518710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.518995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.519016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.010 [2024-11-27 00:42:41.519030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:05.010 [2024-11-27 00:42:41.519039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.522752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.522781] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.010 [2024-11-27 00:42:41.522791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:20:05.010 [2024-11-27 00:42:41.522800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.529753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.529809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:05.010 [2024-11-27 00:42:41.529820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.919 ms 00:20:05.010 [2024-11-27 00:42:41.529832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.532705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.532759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.010 [2024-11-27 00:42:41.532769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:20:05.010 [2024-11-27 00:42:41.532777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.538104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.538155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.010 [2024-11-27 00:42:41.538167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.282 ms 00:20:05.010 [2024-11-27 00:42:41.538175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.538317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.538339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.010 [2024-11-27 00:42:41.538352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:05.010 [2024-11-27 00:42:41.538360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.541267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.010 [2024-11-27 00:42:41.541313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:05.010 [2024-11-27 00:42:41.541324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms 00:20:05.010 [2024-11-27 00:42:41.541331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.010 [2024-11-27 00:42:41.543445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.011 [2024-11-27 00:42:41.543495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:05.011 [2024-11-27 00:42:41.543505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:20:05.011 [2024-11-27 00:42:41.543512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.011 [2024-11-27 00:42:41.545260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.011 [2024-11-27 00:42:41.545308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.011 [2024-11-27 00:42:41.545318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:20:05.011 [2024-11-27 00:42:41.545325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.011 [2024-11-27 00:42:41.546972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:05.011 [2024-11-27 00:42:41.547020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.011 [2024-11-27 00:42:41.547030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:20:05.011 [2024-11-27 00:42:41.547038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.011 [2024-11-27 00:42:41.547080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:05.011 [2024-11-27 00:42:41.547094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547265] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547452] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 
00:42:41.547643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.011 [2024-11-27 00:42:41.547703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:20:05.012 [2024-11-27 00:42:41.547846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.012 [2024-11-27 00:42:41.547902] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:05.012 [2024-11-27 00:42:41.547910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:20:05.012 [2024-11-27 00:42:41.547920] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:05.012 [2024-11-27 00:42:41.547936] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:05.012 [2024-11-27 00:42:41.547943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:05.012 [2024-11-27 00:42:41.547953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:05.012 [2024-11-27 00:42:41.547961] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.012 [2024-11-27 00:42:41.547974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:05.012 [2024-11-27 00:42:41.547982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.012 [2024-11-27 00:42:41.547989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.012 [2024-11-27 00:42:41.547995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.012 [2024-11-27 00:42:41.548002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.012 [2024-11-27 00:42:41.548010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.012 [2024-11-27 00:42:41.548021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:20:05.012 [2024-11-27 00:42:41.548028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.550286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.012 [2024-11-27 00:42:41.550322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.012 [2024-11-27 00:42:41.550333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:20:05.012 [2024-11-27 00:42:41.550348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.550472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.012 [2024-11-27 00:42:41.550483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.012 [2024-11-27 00:42:41.550493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:05.012 [2024-11-27 00:42:41.550501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.558198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.558275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.012 [2024-11-27 00:42:41.558293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 
[2024-11-27 00:42:41.558301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.558377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.558386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.012 [2024-11-27 00:42:41.558398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.558411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.558466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.558477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.012 [2024-11-27 00:42:41.558485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.558493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.558517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.558529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.012 [2024-11-27 00:42:41.558536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.558546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.571840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.571904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.012 [2024-11-27 00:42:41.571916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.571930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.581751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.581808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.012 [2024-11-27 00:42:41.581819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.581827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.581925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.581935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.012 [2024-11-27 00:42:41.581944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.581953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.582004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.582017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.012 [2024-11-27 00:42:41.582026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.582034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.582109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.582121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.012 [2024-11-27 00:42:41.582129] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.582141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.582172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.582186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.012 [2024-11-27 00:42:41.582193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.582201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.582270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.582280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.012 [2024-11-27 00:42:41.582289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.582297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.582347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.012 [2024-11-27 00:42:41.582359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.012 [2024-11-27 00:42:41.582368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.012 [2024-11-27 00:42:41.582377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.012 [2024-11-27 00:42:41.582528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.687 ms, result 0 00:20:05.012 00:20:05.012 00:20:05.012 00:42:41 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:05.274 00:42:41 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:05.847 00:42:42 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:05.847 [2024-11-27 00:42:42.430205] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:20:05.847 [2024-11-27 00:42:42.430381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88230 ] 00:20:05.847 [2024-11-27 00:42:42.588259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.847 [2024-11-27 00:42:42.616694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.108 [2024-11-27 00:42:42.734064] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.108 [2024-11-27 00:42:42.734155] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.372 [2024-11-27 00:42:42.894203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.894292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.372 [2024-11-27 00:42:42.894309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.372 [2024-11-27 00:42:42.894317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.897008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.897057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.372 [2024-11-27 00:42:42.897068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:20:06.372 [2024-11-27 00:42:42.897077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.897293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.372 [2024-11-27 00:42:42.897597] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.372 [2024-11-27 00:42:42.897630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.897638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.372 [2024-11-27 00:42:42.897650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:20:06.372 [2024-11-27 00:42:42.897658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.899488] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.372 [2024-11-27 00:42:42.903006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.903059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.372 [2024-11-27 00:42:42.903077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.520 ms 00:20:06.372 [2024-11-27 00:42:42.903086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.903183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.903195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.372 [2024-11-27 00:42:42.903204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:06.372 [2024-11-27 00:42:42.903212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.911242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:06.372 [2024-11-27 00:42:42.911284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.372 [2024-11-27 00:42:42.911296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.983 ms 00:20:06.372 [2024-11-27 00:42:42.911304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.911447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.911460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.372 [2024-11-27 00:42:42.911469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:06.372 [2024-11-27 00:42:42.911482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.911510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.911519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.372 [2024-11-27 00:42:42.911532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:06.372 [2024-11-27 00:42:42.911540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.911561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:06.372 [2024-11-27 00:42:42.913674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.913712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.372 [2024-11-27 00:42:42.913723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.118 ms 00:20:06.372 [2024-11-27 00:42:42.913736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.913788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.372 [2024-11-27 00:42:42.913797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.372 [2024-11-27 00:42:42.913809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:06.372 [2024-11-27 00:42:42.913817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.372 [2024-11-27 00:42:42.913835] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.372 [2024-11-27 00:42:42.913888] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.372 [2024-11-27 00:42:42.913928] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.372 [2024-11-27 00:42:42.913947] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:06.372 [2024-11-27 00:42:42.914053] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.372 [2024-11-27 00:42:42.914066] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.372 [2024-11-27 00:42:42.914077] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:06.372 [2024-11-27 00:42:42.914088] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.372 [2024-11-27 00:42:42.914097] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.372 [2024-11-27 00:42:42.914107] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:06.372 [2024-11-27 00:42:42.914115] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.372 [2024-11-27 00:42:42.914123] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.373 [2024-11-27 00:42:42.914137] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.373 [2024-11-27 00:42:42.914145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.373 [2024-11-27 00:42:42.914153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.373 [2024-11-27 00:42:42.914161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:20:06.373 [2024-11-27 00:42:42.914169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.373 [2024-11-27 00:42:42.914278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.373 [2024-11-27 00:42:42.914297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.373 [2024-11-27 00:42:42.914306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:06.373 [2024-11-27 00:42:42.914319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.373 [2024-11-27 00:42:42.914425] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.373 [2024-11-27 00:42:42.914445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.373 [2024-11-27 00:42:42.914455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.373 [2024-11-27 00:42:42.914484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.373 [2024-11-27 00:42:42.914512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.373 [2024-11-27 00:42:42.914530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.373 [2024-11-27 00:42:42.914538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:06.373 [2024-11-27 00:42:42.914546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.373 [2024-11-27 00:42:42.914553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.373 [2024-11-27 00:42:42.914562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:06.373 [2024-11-27 00:42:42.914576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.373 [2024-11-27 00:42:42.914593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914601] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.373 [2024-11-27 00:42:42.914618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.373 [2024-11-27 00:42:42.914650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.373 [2024-11-27 00:42:42.914673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.373 [2024-11-27 00:42:42.914698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.373 [2024-11-27 00:42:42.914722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.373 [2024-11-27 00:42:42.914737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.373 [2024-11-27 00:42:42.914745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:06.373 [2024-11-27 00:42:42.914753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.373 [2024-11-27 00:42:42.914761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.373 [2024-11-27 00:42:42.914770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:06.373 [2024-11-27 00:42:42.914779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.373 [2024-11-27 00:42:42.914793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:06.373 [2024-11-27 00:42:42.914800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914807] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.373 [2024-11-27 00:42:42.914816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.373 [2024-11-27 00:42:42.914829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.373 [2024-11-27 00:42:42.914846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.373 [2024-11-27 00:42:42.914877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.373 [2024-11-27 00:42:42.914886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.373 
[2024-11-27 00:42:42.914895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.373 [2024-11-27 00:42:42.914902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.373 [2024-11-27 00:42:42.914910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.373 [2024-11-27 00:42:42.914919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.373 [2024-11-27 00:42:42.914928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.373 [2024-11-27 00:42:42.914939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:06.373 [2024-11-27 00:42:42.914947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:06.373 [2024-11-27 00:42:42.914956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:06.373 [2024-11-27 00:42:42.914965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:06.373 [2024-11-27 00:42:42.914972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:06.373 [2024-11-27 00:42:42.914979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:06.373 [2024-11-27 00:42:42.914987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:06.373 [2024-11-27 00:42:42.914994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:06.373 [2024-11-27 00:42:42.915001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:06.373 [2024-11-27 00:42:42.915008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:06.373 [2024-11-27 00:42:42.915017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:06.373 [2024-11-27 00:42:42.915024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:06.373 [2024-11-27 00:42:42.915032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:06.373 [2024-11-27 00:42:42.915039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:06.373 [2024-11-27 00:42:42.915047] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.373 [2024-11-27 00:42:42.915058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.373 [2024-11-27 00:42:42.915070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.373 [2024-11-27 00:42:42.915077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.373 [2024-11-27 00:42:42.915085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.373 [2024-11-27 00:42:42.915092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.373 [2024-11-27 00:42:42.915101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.373 [2024-11-27 00:42:42.915108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.373 [2024-11-27 00:42:42.915117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:20:06.373 [2024-11-27 00:42:42.915124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.373 [2024-11-27 00:42:42.929082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.373 [2024-11-27 00:42:42.929126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.373 [2024-11-27 00:42:42.929138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.902 ms 00:20:06.373 [2024-11-27 00:42:42.929146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.373 [2024-11-27 00:42:42.929289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.373 [2024-11-27 00:42:42.929301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.373 [2024-11-27 00:42:42.929311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:06.373 [2024-11-27 00:42:42.929327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.373 [2024-11-27 00:42:42.950410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.373 [2024-11-27 00:42:42.950471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.373 [2024-11-27 00:42:42.950493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.059 ms 00:20:06.373 [2024-11-27 00:42:42.950504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.373 [2024-11-27 00:42:42.950616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.950631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.374 [2024-11-27 00:42:42.950644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.374 [2024-11-27 00:42:42.950654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.951267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.951313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.374 [2024-11-27 00:42:42.951328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:20:06.374 [2024-11-27 00:42:42.951339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.951529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.951545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.374 [2024-11-27 00:42:42.951556] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:20:06.374 [2024-11-27 00:42:42.951571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.960239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.960292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.374 [2024-11-27 00:42:42.960309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.639 ms 00:20:06.374 [2024-11-27 00:42:42.960320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.964217] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:06.374 [2024-11-27 00:42:42.964270] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.374 [2024-11-27 00:42:42.964282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.964291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.374 [2024-11-27 00:42:42.964300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.860 ms 00:20:06.374 [2024-11-27 00:42:42.964308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.980550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.980621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:06.374 [2024-11-27 00:42:42.980638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.159 ms 00:20:06.374 [2024-11-27 00:42:42.980646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.983689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.983740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:06.374 [2024-11-27 00:42:42.983751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:20:06.374 [2024-11-27 00:42:42.983759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.986400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.986445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:06.374 [2024-11-27 00:42:42.986456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:20:06.374 [2024-11-27 00:42:42.986463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:42.986836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:42.986879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.374 [2024-11-27 00:42:42.986891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:06.374 [2024-11-27 00:42:42.986899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.012589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.012639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:06.374 [2024-11-27 00:42:43.012651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.666 ms 00:20:06.374 [2024-11-27 00:42:43.012660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.020869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:06.374 [2024-11-27 00:42:43.039980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.040028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.374 [2024-11-27 00:42:43.040040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.230 ms 00:20:06.374 [2024-11-27 00:42:43.040049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.040136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.040148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:06.374 [2024-11-27 00:42:43.040161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:06.374 [2024-11-27 00:42:43.040169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.040226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.040244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.374 [2024-11-27 00:42:43.040253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:06.374 [2024-11-27 00:42:43.040261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.040289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.040299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.374 [2024-11-27 00:42:43.040308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:06.374 [2024-11-27 00:42:43.040319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.040360] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:06.374 [2024-11-27 00:42:43.040370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.040378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:06.374 [2024-11-27 00:42:43.040386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:06.374 [2024-11-27 00:42:43.040396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.046593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.046645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.374 [2024-11-27 00:42:43.046657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:20:06.374 [2024-11-27 00:42:43.046674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 [2024-11-27 00:42:43.046770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.374 [2024-11-27 00:42:43.046782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.374 [2024-11-27 00:42:43.046791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:06.374 [2024-11-27 00:42:43.046799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.374 
[2024-11-27 00:42:43.048025] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.374 [2024-11-27 00:42:43.049400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.470 ms, result 0 00:20:06.374 [2024-11-27 00:42:43.050752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.374 [2024-11-27 00:42:43.058083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.638  [2024-11-27T00:42:43.425Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-11-27 00:42:43.310304] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.638 [2024-11-27 00:42:43.311443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.311494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.638 [2024-11-27 00:42:43.311507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:06.638 [2024-11-27 00:42:43.311515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.311536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:06.638 [2024-11-27 00:42:43.312220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.312261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:06.638 [2024-11-27 00:42:43.312271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:20:06.638 [2024-11-27 00:42:43.312280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.314777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.314826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:06.638 [2024-11-27 00:42:43.314849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.471 ms 00:20:06.638 [2024-11-27 00:42:43.314878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.319336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.319377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:06.638 [2024-11-27 00:42:43.319388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.441 ms 00:20:06.638 [2024-11-27 00:42:43.319404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.326351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.326412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:06.638 [2024-11-27 00:42:43.326427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:20:06.638 [2024-11-27 00:42:43.326435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.329402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.329452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:06.638 [2024-11-27 00:42:43.329462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.904 ms 00:20:06.638 [2024-11-27 00:42:43.329470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.334807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.334874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:06.638 [2024-11-27 00:42:43.334886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.292 ms 00:20:06.638 [2024-11-27 00:42:43.334894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.335026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.335039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:06.638 [2024-11-27 00:42:43.335055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:06.638 [2024-11-27 00:42:43.335067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.338307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.338355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:06.638 [2024-11-27 00:42:43.338364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:20:06.638 [2024-11-27 00:42:43.338372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.341474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.341523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:06.638 [2024-11-27 00:42:43.341533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:20:06.638 [2024-11-27 00:42:43.341540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.344035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.344085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.638 [2024-11-27 00:42:43.344094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:20:06.638 [2024-11-27 00:42:43.344102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.346407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.638 [2024-11-27 00:42:43.346454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.638 [2024-11-27 00:42:43.346464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:20:06.638 [2024-11-27 00:42:43.346472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.638 [2024-11-27 00:42:43.346513] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.638 [2024-11-27 00:42:43.346529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 
00:42:43.346564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:06.638 [2024-11-27 00:42:43.346644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:20:06.639 [2024-11-27 00:42:43.346762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.346994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:06.639 [2024-11-27 00:42:43.347358] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:06.639 [2024-11-27 00:42:43.347367] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:20:06.639 [2024-11-27 00:42:43.347376] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:06.639 [2024-11-27 00:42:43.347384] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:06.639 
[2024-11-27 00:42:43.347391] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:06.640 [2024-11-27 00:42:43.347399] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:06.640 [2024-11-27 00:42:43.347406] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:06.640 [2024-11-27 00:42:43.347418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:06.640 [2024-11-27 00:42:43.347426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:06.640 [2024-11-27 00:42:43.347432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:06.640 [2024-11-27 00:42:43.347439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:06.640 [2024-11-27 00:42:43.347446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.640 [2024-11-27 00:42:43.347454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.640 [2024-11-27 00:42:43.347463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:20:06.640 [2024-11-27 00:42:43.347470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.349425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.640 [2024-11-27 00:42:43.349471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.640 [2024-11-27 00:42:43.349482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms 00:20:06.640 [2024-11-27 00:42:43.349493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.349601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.640 [2024-11-27 00:42:43.349612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.640 [2024-11-27 00:42:43.349622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:06.640 [2024-11-27 00:42:43.349629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.357332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.357380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.640 [2024-11-27 00:42:43.357396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.357404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.357468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.357477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.640 [2024-11-27 00:42:43.357490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.357498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.357561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.357572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.640 [2024-11-27 00:42:43.357581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.357597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.357614] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.357626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.640 [2024-11-27 00:42:43.357634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.357642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.371454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.371503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.640 [2024-11-27 00:42:43.371521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.371535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.382019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.382264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.640 [2024-11-27 00:42:43.382283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.382293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.382349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.382359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.640 [2024-11-27 00:42:43.382377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.382386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.382421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.382433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.640 [2024-11-27 00:42:43.382443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.382455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.382534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.382545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.640 [2024-11-27 00:42:43.382554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.382562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.382598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.382613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:06.640 [2024-11-27 00:42:43.382622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.382634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.382676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.382685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.640 [2024-11-27 00:42:43.382695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.382703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:06.640 [2024-11-27 00:42:43.382752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.640 [2024-11-27 00:42:43.382764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.640 [2024-11-27 00:42:43.382773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.640 [2024-11-27 00:42:43.382786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.640 [2024-11-27 00:42:43.382969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.506 ms, result 0 00:20:06.901 00:20:06.901 00:20:06.901 00:42:43 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=88244 00:20:06.901 00:42:43 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 88244 00:20:06.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.901 00:42:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88244 ']' 00:20:06.901 00:42:43 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.901 00:42:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.901 00:42:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.901 00:42:43 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.901 00:42:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:06.901 00:42:43 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:06.901 [2024-11-27 00:42:43.663737] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:20:06.901 [2024-11-27 00:42:43.663849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88244 ] 00:20:07.164 [2024-11-27 00:42:43.815505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.164 [2024-11-27 00:42:43.844201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.108 00:42:44 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.108 00:42:44 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:08.108 00:42:44 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:08.108 [2024-11-27 00:42:44.747275] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.108 [2024-11-27 00:42:44.747363] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:08.371 [2024-11-27 00:42:44.924783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.924849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:08.371 [2024-11-27 00:42:44.924888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.371 [2024-11-27 00:42:44.924899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.927790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.927879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.371 [2024-11-27 00:42:44.927893] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:20:08.371 [2024-11-27 00:42:44.927903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.928051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:08.371 [2024-11-27 00:42:44.928334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:08.371 [2024-11-27 00:42:44.928352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.928363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.371 [2024-11-27 00:42:44.928374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:08.371 [2024-11-27 00:42:44.928386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.930256] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:08.371 [2024-11-27 00:42:44.934054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.934102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:08.371 [2024-11-27 00:42:44.934117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.795 ms 00:20:08.371 [2024-11-27 00:42:44.934125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.934207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.934217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:08.371 [2024-11-27 00:42:44.934246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:08.371 [2024-11-27 00:42:44.934258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.942391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.942591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.371 [2024-11-27 00:42:44.942618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.079 ms 00:20:08.371 [2024-11-27 00:42:44.942626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.942750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.942761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.371 [2024-11-27 00:42:44.942777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:08.371 [2024-11-27 00:42:44.942785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.942815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.371 [2024-11-27 00:42:44.942826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:08.371 [2024-11-27 00:42:44.942837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:08.371 [2024-11-27 00:42:44.942849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.371 [2024-11-27 00:42:44.942900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:08.372 [2024-11-27 00:42:44.944846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:08.372 [2024-11-27 00:42:44.944931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.372 [2024-11-27 00:42:44.944944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.954 ms 00:20:08.372 [2024-11-27 00:42:44.944958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.372 [2024-11-27 00:42:44.944998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.372 [2024-11-27 00:42:44.945010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:08.372 [2024-11-27 00:42:44.945019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:08.372 [2024-11-27 00:42:44.945029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.372 [2024-11-27 00:42:44.945051] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:08.372 [2024-11-27 00:42:44.945075] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:08.372 [2024-11-27 00:42:44.945120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:08.372 [2024-11-27 00:42:44.945141] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:08.372 [2024-11-27 00:42:44.945248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:08.372 [2024-11-27 00:42:44.945270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:08.372 [2024-11-27 00:42:44.945285] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:08.372 [2024-11-27 00:42:44.945298] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:08.372 [2024-11-27 00:42:44.945308] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:08.372 [2024-11-27 00:42:44.945321] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:08.372 [2024-11-27 00:42:44.945334] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:08.372 [2024-11-27 00:42:44.945346] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:08.372 [2024-11-27 00:42:44.945354] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:08.372 [2024-11-27 00:42:44.945364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.372 [2024-11-27 00:42:44.945372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:08.372 [2024-11-27 00:42:44.945382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:08.372 [2024-11-27 00:42:44.945391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.372 [2024-11-27 00:42:44.945482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.372 [2024-11-27 00:42:44.945493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:08.372 [2024-11-27 00:42:44.945504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:08.372 [2024-11-27 00:42:44.945512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.372 [2024-11-27 00:42:44.945620] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:08.372 [2024-11-27 00:42:44.945633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:08.372 [2024-11-27 00:42:44.945650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.372 [2024-11-27 00:42:44.945660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.372 [2024-11-27 00:42:44.945675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:08.372 [2024-11-27 00:42:44.945690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:08.372 [2024-11-27 00:42:44.945700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:08.372 [2024-11-27 00:42:44.945710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:08.372 [2024-11-27 00:42:44.945722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:08.372 [2024-11-27 00:42:44.945729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.372 [2024-11-27 00:42:44.945740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:08.372 [2024-11-27 00:42:44.945749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:08.372 [2024-11-27 00:42:44.945760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:08.372 [2024-11-27 00:42:44.945769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:08.372 [2024-11-27 00:42:44.945780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:08.372 [2024-11-27 00:42:44.945787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.372 [2024-11-27 00:42:44.945798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:08.372 [2024-11-27 00:42:44.945806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:08.372 [2024-11-27 00:42:44.945818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.372 [2024-11-27 00:42:44.945826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:08.372 [2024-11-27 00:42:44.945837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:08.372 [2024-11-27 00:42:44.945845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.372 [2024-11-27 00:42:44.946088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:08.372 [2024-11-27 00:42:44.946127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:08.372 [2024-11-27 00:42:44.946149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.372 [2024-11-27 00:42:44.946169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:08.372 [2024-11-27 00:42:44.946193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:08.372 [2024-11-27 00:42:44.946214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.372 [2024-11-27 00:42:44.946248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:08.372 [2024-11-27 00:42:44.946269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:08.372 [2024-11-27 00:42:44.946290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:08.372 [2024-11-27 00:42:44.946311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:08.372 [2024-11-27 
00:42:44.946332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:08.372 [2024-11-27 00:42:44.946351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.372 [2024-11-27 00:42:44.946431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:08.372 [2024-11-27 00:42:44.946455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:08.372 [2024-11-27 00:42:44.946478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:08.372 [2024-11-27 00:42:44.946499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:08.372 [2024-11-27 00:42:44.947030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:08.372 [2024-11-27 00:42:44.947087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.372 [2024-11-27 00:42:44.947114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:08.372 [2024-11-27 00:42:44.947137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:08.372 [2024-11-27 00:42:44.947211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.372 [2024-11-27 00:42:44.947223] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:08.372 [2024-11-27 00:42:44.947234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:08.372 [2024-11-27 00:42:44.947243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:08.372 [2024-11-27 00:42:44.947254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:08.372 [2024-11-27 00:42:44.947263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:08.372 [2024-11-27 00:42:44.947273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:08.372 [2024-11-27 00:42:44.947283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:08.372 [2024-11-27 00:42:44.947293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:08.372 [2024-11-27 00:42:44.947300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:08.372 [2024-11-27 00:42:44.947314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:08.372 [2024-11-27 00:42:44.947324] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:08.372 [2024-11-27 00:42:44.947339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.372 [2024-11-27 00:42:44.947351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:08.373 [2024-11-27 00:42:44.947361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:08.373 [2024-11-27 00:42:44.947370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:08.373 [2024-11-27 00:42:44.947379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:08.373 [2024-11-27 00:42:44.947387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:08.373 
[2024-11-27 00:42:44.947396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:08.373 [2024-11-27 00:42:44.947403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:08.373 [2024-11-27 00:42:44.947412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:08.373 [2024-11-27 00:42:44.947421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:08.373 [2024-11-27 00:42:44.947430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:08.373 [2024-11-27 00:42:44.947437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:08.373 [2024-11-27 00:42:44.947446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:08.373 [2024-11-27 00:42:44.947453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:08.373 [2024-11-27 00:42:44.947467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:08.373 [2024-11-27 00:42:44.947474] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:08.373 [2024-11-27 00:42:44.947485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:08.373 [2024-11-27 00:42:44.947496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:08.373 [2024-11-27 00:42:44.947506] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:08.373 [2024-11-27 00:42:44.947513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:08.373 [2024-11-27 00:42:44.947522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:08.373 [2024-11-27 00:42:44.947533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.947544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:08.373 [2024-11-27 00:42:44.947553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:20:08.373 [2024-11-27 00:42:44.947564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.961620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.961672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.373 [2024-11-27 00:42:44.961684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.945 ms 00:20:08.373 [2024-11-27 00:42:44.961703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.961834] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.961851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:08.373 [2024-11-27 00:42:44.961891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:08.373 [2024-11-27 00:42:44.961902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.974479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.974530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.373 [2024-11-27 00:42:44.974541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.548 ms 00:20:08.373 [2024-11-27 00:42:44.974559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.974626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.974638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.373 [2024-11-27 00:42:44.974648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:08.373 [2024-11-27 00:42:44.974658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.975196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.975234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.373 [2024-11-27 00:42:44.975247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:20:08.373 [2024-11-27 00:42:44.975259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.975416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.975434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.373 [2024-11-27 00:42:44.975445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:08.373 [2024-11-27 00:42:44.975456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.983783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.983834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.373 [2024-11-27 00:42:44.983845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.297 ms 00:20:08.373 [2024-11-27 00:42:44.983879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:44.995834] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:08.373 [2024-11-27 00:42:44.995909] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:08.373 [2024-11-27 00:42:44.995925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:44.995936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:08.373 [2024-11-27 00:42:44.995948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.945 ms 00:20:08.373 [2024-11-27 00:42:44.995959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:45.012064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 
00:42:45.012121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:08.373 [2024-11-27 00:42:45.012140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.043 ms 00:20:08.373 [2024-11-27 00:42:45.012154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:45.015327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:45.015543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:08.373 [2024-11-27 00:42:45.015563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:20:08.373 [2024-11-27 00:42:45.015573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:45.018471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:45.018525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:08.373 [2024-11-27 00:42:45.018536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:20:08.373 [2024-11-27 00:42:45.018545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:45.019057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:45.019121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:08.373 [2024-11-27 00:42:45.019146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:20:08.373 [2024-11-27 00:42:45.019169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:45.046090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:45.046274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:08.373 [2024-11-27 00:42:45.046338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.837 ms 00:20:08.373 [2024-11-27 00:42:45.046359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:45.054696] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:08.373 [2024-11-27 00:42:45.073412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.373 [2024-11-27 00:42:45.073601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:08.373 [2024-11-27 00:42:45.073627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.970 ms 00:20:08.373 [2024-11-27 00:42:45.073636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.373 [2024-11-27 00:42:45.073726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.374 [2024-11-27 00:42:45.073740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:08.374 [2024-11-27 00:42:45.073753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:08.374 [2024-11-27 00:42:45.073762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.374 [2024-11-27 00:42:45.073828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.374 [2024-11-27 00:42:45.073840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:08.374 [2024-11-27 00:42:45.073885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:08.374 [2024-11-27 
00:42:45.073894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.374 [2024-11-27 00:42:45.073922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.374 [2024-11-27 00:42:45.073931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:08.374 [2024-11-27 00:42:45.073951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.374 [2024-11-27 00:42:45.073958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.374 [2024-11-27 00:42:45.073996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:08.374 [2024-11-27 00:42:45.074008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.374 [2024-11-27 00:42:45.074023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:08.374 [2024-11-27 00:42:45.074032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:08.374 [2024-11-27 00:42:45.074042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.374 [2024-11-27 00:42:45.080097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.374 [2024-11-27 00:42:45.080284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:08.374 [2024-11-27 00:42:45.080305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.031 ms 00:20:08.374 [2024-11-27 00:42:45.080322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.374 [2024-11-27 00:42:45.080407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.374 [2024-11-27 00:42:45.080420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:08.374 [2024-11-27 00:42:45.080430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:08.374 [2024-11-27 00:42:45.080439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.374 [2024-11-27 00:42:45.081626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:08.374 [2024-11-27 00:42:45.083056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.547 ms, result 0 00:20:08.374 [2024-11-27 00:42:45.085031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.374 Some configs were skipped because the RPC state that can call them passed over. 
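For reference, the flow this part of the log exercises — start spdk_tgt, wait for the RPC socket, replay the saved config to recreate ftl0, then issue the two bdev_ftl_unmap calls traced below before killing the target — can be sketched as a standalone script. This is a minimal sketch under assumptions: paths and the svcpid variable name are taken from this run's trace; wait_for_rpc_sock is a hypothetical stand-in for the waitforlisten helper from autotest_common.sh, and ftl.json is a placeholder for whatever JSON config the test saved earlier (the trace shows only "rpc.py load_config", not the input file).

#!/usr/bin/env bash
# Sketch of the ftl_trim sequence traced in this log (not the test's exact code).
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk.sock

"$SPDK/build/bin/spdk_tgt" -L ftl_init &   # trim.sh@92: start target with FTL init logging
svcpid=$!                                  # trace shows svcpid=88244 in this run

wait_for_rpc_sock() {                      # stand-in for waitforlisten: poll until RPC answers
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
}
wait_for_rpc_sock

"$SPDK/scripts/rpc.py" load_config < ftl.json                                    # trim.sh@96
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024   # trim.sh@99
"$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # trim.sh@100
kill "$svcpid" && wait "$svcpid"           # trim.sh@102: killprocess triggers FTL shutdown

The two unmap LBAs match the trace: 0 (start of the device) and 23591936 (the last 1024 blocks of the 23592960 L2P entries reported in the layout dump above), so the test trims one range at each end of the address space.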
00:20:08.374 00:42:45 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:08.635 [2024-11-27 00:42:45.326885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.635 [2024-11-27 00:42:45.327058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:08.635 [2024-11-27 00:42:45.327127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:20:08.635 [2024-11-27 00:42:45.327152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.635 [2024-11-27 00:42:45.327208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.558 ms, result 0 00:20:08.635 true 00:20:08.635 00:42:45 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:08.897 [2024-11-27 00:42:45.542786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.897 [2024-11-27 00:42:45.542983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:08.897 [2024-11-27 00:42:45.543053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.857 ms 00:20:08.897 [2024-11-27 00:42:45.543080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.897 [2024-11-27 00:42:45.543137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.205 ms, result 0 00:20:08.897 true 00:20:08.897 00:42:45 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 88244 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88244 ']' 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88244 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88244 00:20:08.897 killing process with pid 88244 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88244' 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88244 00:20:08.897 00:42:45 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88244 00:20:09.159 [2024-11-27 00:42:45.712273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.159 [2024-11-27 00:42:45.712331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:09.159 [2024-11-27 00:42:45.712346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:09.159 [2024-11-27 00:42:45.712360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.159 [2024-11-27 00:42:45.712386] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:09.159 [2024-11-27 00:42:45.712922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.159 [2024-11-27 00:42:45.712950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:09.159 [2024-11-27 00:42:45.712960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.521 ms 00:20:09.159 [2024-11-27 00:42:45.712969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.159 [2024-11-27 00:42:45.713255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.159 [2024-11-27 00:42:45.713270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:09.159 [2024-11-27 00:42:45.713279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:09.159 [2024-11-27 00:42:45.713291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.159 [2024-11-27 00:42:45.717910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.159 [2024-11-27 00:42:45.717949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:09.159 [2024-11-27 00:42:45.717960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.599 ms 00:20:09.159 [2024-11-27 00:42:45.717979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.724950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.724989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:09.160 [2024-11-27 00:42:45.725000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.934 ms 00:20:09.160 [2024-11-27 00:42:45.725012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.727692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.727739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:09.160 [2024-11-27 00:42:45.727749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:20:09.160 [2024-11-27 00:42:45.727759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.732689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.732735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:09.160 [2024-11-27 00:42:45.732748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.890 ms 00:20:09.160 [2024-11-27 00:42:45.732758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.732911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.732926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:09.160 [2024-11-27 00:42:45.732935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:20:09.160 [2024-11-27 00:42:45.732945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.736089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.736131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:09.160 [2024-11-27 00:42:45.736141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:20:09.160 [2024-11-27 00:42:45.736155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.738800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.738844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:09.160 [2024-11-27 
00:42:45.738871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:20:09.160 [2024-11-27 00:42:45.738882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.740698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.740741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:09.160 [2024-11-27 00:42:45.740750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:20:09.160 [2024-11-27 00:42:45.740759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.742784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.160 [2024-11-27 00:42:45.742995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:09.160 [2024-11-27 00:42:45.743012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.960 ms 00:20:09.160 [2024-11-27 00:42:45.743021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.160 [2024-11-27 00:42:45.743056] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:09.160 [2024-11-27 00:42:45.743072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743220] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 
00:42:45.743444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:09.160 [2024-11-27 00:42:45.743620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:09.161 [2024-11-27 00:42:45.743669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:09.161 [2024-11-27 00:42:45.743997] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:09.161 [2024-11-27 00:42:45.744008] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:20:09.161 [2024-11-27 00:42:45.744020] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:09.161 [2024-11-27 00:42:45.744028] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:09.161 [2024-11-27 00:42:45.744038] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:09.161 [2024-11-27 00:42:45.744050] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:09.161 [2024-11-27 00:42:45.744061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:09.161 [2024-11-27 00:42:45.744069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:09.161 [2024-11-27 00:42:45.744079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:09.161 [2024-11-27 00:42:45.744086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:09.161 [2024-11-27 00:42:45.744096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:09.161 [2024-11-27 00:42:45.744103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.161 [2024-11-27 00:42:45.744112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:09.161 [2024-11-27 00:42:45.744122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:20:09.161 [2024-11-27 00:42:45.744133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.745944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.161 [2024-11-27 00:42:45.745974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:09.161 [2024-11-27 00:42:45.745985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.792 ms 00:20:09.161 [2024-11-27 00:42:45.745994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.746121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:09.161 [2024-11-27 00:42:45.746133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:09.161 [2024-11-27 00:42:45.746142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:09.161 [2024-11-27 00:42:45.746153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.752629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.752766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.161 [2024-11-27 00:42:45.752827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.752867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.752952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.752981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.161 [2024-11-27 00:42:45.753052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.753086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.753146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.753507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.161 [2024-11-27 00:42:45.753558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.753585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.753688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.753724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.161 [2024-11-27 00:42:45.753785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.753811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.765358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.765535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.161 [2024-11-27 00:42:45.765552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.765562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.774529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.774668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.161 [2024-11-27 00:42:45.774721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.774750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.774812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.774839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.161 [2024-11-27 00:42:45.774884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.774908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
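Each management step in this trace is logged as a four-entry Action/Rollback record: the record type, the step name, its duration, and its status. To see where the shutdown (or startup) time actually goes, the name and duration entries can be paired up; this again assumes a per-line capture named ftl.log:

  # Pair each trace_step name with the duration reported on the entry
  # that follows it; assumes the raw console log, one entry per line.
  awk '/trace_step/ && / name: /     { sub(/.* name: /, "");     n = $0 }
       /trace_step/ && / duration: / { sub(/.* duration: /, ""); print n "\t" $0 }' ftl.log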
00:20:09.161 [2024-11-27 00:42:45.774952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.775029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.161 [2024-11-27 00:42:45.775056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.775079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.775171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.775203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.161 [2024-11-27 00:42:45.775224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.775246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.775346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.161 [2024-11-27 00:42:45.775378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:09.161 [2024-11-27 00:42:45.775400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.161 [2024-11-27 00:42:45.775423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.161 [2024-11-27 00:42:45.775642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.162 [2024-11-27 00:42:45.775690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.162 [2024-11-27 00:42:45.775712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.162 [2024-11-27 00:42:45.775734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.162 [2024-11-27 00:42:45.775800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.162 [2024-11-27 00:42:45.775890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.162 [2024-11-27 00:42:45.775916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.162 [2024-11-27 00:42:45.775971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.162 [2024-11-27 00:42:45.776142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.843 ms, result 0 00:20:09.423 00:42:45 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:09.423 [2024-11-27 00:42:46.050146] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
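The 'FTL shutdown' result above closes the previous test step; trim.sh line 105 then launches spdk_dd to read the device contents back into a file, which is what triggers the 'FTL startup' sequence that follows. The invocation is reproduced below from the log, with the repository path factored into a variable; the paths belong to this CI workspace, so adjust them for your own checkout:

  # Copy 65536 blocks from the FTL bdev "ftl0" (instantiated from
  # ftl.json) into the flat file test/ftl/data.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
      --of="$SPDK/test/ftl/data" \
      --count=65536 \
      --json="$SPDK/test/ftl/config/ftl.json"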
00:20:09.423 [2024-11-27 00:42:46.050639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88280 ] 00:20:09.683 [2024-11-27 00:42:46.212431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.683 [2024-11-27 00:42:46.232358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.683 [2024-11-27 00:42:46.330667] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.683 [2024-11-27 00:42:46.330742] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.947 [2024-11-27 00:42:46.492161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.492217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:09.947 [2024-11-27 00:42:46.492233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:09.947 [2024-11-27 00:42:46.492249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.494887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.494939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.947 [2024-11-27 00:42:46.494951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.617 ms 00:20:09.947 [2024-11-27 00:42:46.494963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.495073] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:09.947 [2024-11-27 00:42:46.495349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:09.947 [2024-11-27 00:42:46.495369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.495380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.947 [2024-11-27 00:42:46.495390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:20:09.947 [2024-11-27 00:42:46.495398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.497137] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:09.947 [2024-11-27 00:42:46.500913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.500959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:09.947 [2024-11-27 00:42:46.500975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.779 ms 00:20:09.947 [2024-11-27 00:42:46.500984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.501068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.501079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:09.947 [2024-11-27 00:42:46.501088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:09.947 [2024-11-27 00:42:46.501096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.509254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:09.947 [2024-11-27 00:42:46.509299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.947 [2024-11-27 00:42:46.509311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.093 ms 00:20:09.947 [2024-11-27 00:42:46.509319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.509460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.509472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.947 [2024-11-27 00:42:46.509482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:09.947 [2024-11-27 00:42:46.509495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.509525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.509535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:09.947 [2024-11-27 00:42:46.509544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:09.947 [2024-11-27 00:42:46.509552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.509577] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:09.947 [2024-11-27 00:42:46.511708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.511757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.947 [2024-11-27 00:42:46.511768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:20:09.947 [2024-11-27 00:42:46.511779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.511828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.511838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:09.947 [2024-11-27 00:42:46.511848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:09.947 [2024-11-27 00:42:46.511879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.511898] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:09.947 [2024-11-27 00:42:46.511920] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:09.947 [2024-11-27 00:42:46.511961] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:09.947 [2024-11-27 00:42:46.511985] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:09.947 [2024-11-27 00:42:46.512092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:09.947 [2024-11-27 00:42:46.512105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:09.947 [2024-11-27 00:42:46.512120] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:09.947 [2024-11-27 00:42:46.512131] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512147] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512155] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:09.947 [2024-11-27 00:42:46.512163] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:09.947 [2024-11-27 00:42:46.512171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:09.947 [2024-11-27 00:42:46.512183] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:09.947 [2024-11-27 00:42:46.512194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.512208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:09.947 [2024-11-27 00:42:46.512216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:09.947 [2024-11-27 00:42:46.512223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.512313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.947 [2024-11-27 00:42:46.512324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:09.947 [2024-11-27 00:42:46.512337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:09.947 [2024-11-27 00:42:46.512344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.947 [2024-11-27 00:42:46.512452] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:09.947 [2024-11-27 00:42:46.512474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:09.947 [2024-11-27 00:42:46.512484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:09.947 [2024-11-27 00:42:46.512518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:09.947 [2024-11-27 00:42:46.512547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.947 [2024-11-27 00:42:46.512563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:09.947 [2024-11-27 00:42:46.512572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:09.947 [2024-11-27 00:42:46.512581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.947 [2024-11-27 00:42:46.512590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:09.947 [2024-11-27 00:42:46.512598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:09.947 [2024-11-27 00:42:46.512606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:09.947 [2024-11-27 00:42:46.512626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512634] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:09.947 [2024-11-27 00:42:46.512651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:09.947 [2024-11-27 00:42:46.512684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:09.947 [2024-11-27 00:42:46.512710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:09.947 [2024-11-27 00:42:46.512718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.947 [2024-11-27 00:42:46.512728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:09.948 [2024-11-27 00:42:46.512736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:09.948 [2024-11-27 00:42:46.512744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.948 [2024-11-27 00:42:46.512751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:09.948 [2024-11-27 00:42:46.512760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:09.948 [2024-11-27 00:42:46.512768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.948 [2024-11-27 00:42:46.512776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:09.948 [2024-11-27 00:42:46.512783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:09.948 [2024-11-27 00:42:46.512791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.948 [2024-11-27 00:42:46.512800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:09.948 [2024-11-27 00:42:46.512808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:09.948 [2024-11-27 00:42:46.512818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.948 [2024-11-27 00:42:46.512825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:09.948 [2024-11-27 00:42:46.512835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:09.948 [2024-11-27 00:42:46.512842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.948 [2024-11-27 00:42:46.512850] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:09.948 [2024-11-27 00:42:46.512876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:09.948 [2024-11-27 00:42:46.512885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.948 [2024-11-27 00:42:46.512895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.948 [2024-11-27 00:42:46.512902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:09.948 [2024-11-27 00:42:46.512909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:09.948 [2024-11-27 00:42:46.512916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:09.948 
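A quick consistency check on the layout numbers above: the l2p region's 90.00 MiB follows directly from the figures ftl_layout_setup reports, 23592960 L2P entries at an address size of 4 bytes:

  # 23592960 entries * 4 bytes each, expressed in MiB.
  echo $(( 23592960 * 4 / 1024 / 1024 ))   # -> 90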
[2024-11-27 00:42:46.512923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:09.948 [2024-11-27 00:42:46.512933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:09.948 [2024-11-27 00:42:46.512942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:09.948 [2024-11-27 00:42:46.512951] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:09.948 [2024-11-27 00:42:46.512964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.948 [2024-11-27 00:42:46.512976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:09.948 [2024-11-27 00:42:46.512985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:09.948 [2024-11-27 00:42:46.512993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:09.948 [2024-11-27 00:42:46.513001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:09.948 [2024-11-27 00:42:46.513009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:09.948 [2024-11-27 00:42:46.513016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:09.948 [2024-11-27 00:42:46.513023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:09.948 [2024-11-27 00:42:46.513032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:09.948 [2024-11-27 00:42:46.513040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:09.948 [2024-11-27 00:42:46.513047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:09.948 [2024-11-27 00:42:46.513055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:09.948 [2024-11-27 00:42:46.513063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:09.948 [2024-11-27 00:42:46.513072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:09.948 [2024-11-27 00:42:46.513080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:09.948 [2024-11-27 00:42:46.513087] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:09.948 [2024-11-27 00:42:46.513097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.948 [2024-11-27 00:42:46.513110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:09.948 [2024-11-27 00:42:46.513118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:09.948 [2024-11-27 00:42:46.513126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:09.948 [2024-11-27 00:42:46.513134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:09.948 [2024-11-27 00:42:46.513143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.513151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:09.948 [2024-11-27 00:42:46.513159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:20:09.948 [2024-11-27 00:42:46.513167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.528771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.528818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.948 [2024-11-27 00:42:46.528832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.551 ms 00:20:09.948 [2024-11-27 00:42:46.528841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.529014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.529028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:09.948 [2024-11-27 00:42:46.529037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:09.948 [2024-11-27 00:42:46.529045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.549358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.549584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.948 [2024-11-27 00:42:46.549610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.287 ms 00:20:09.948 [2024-11-27 00:42:46.549622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.549741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.549758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.948 [2024-11-27 00:42:46.549770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:09.948 [2024-11-27 00:42:46.549781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.550392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.550442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.948 [2024-11-27 00:42:46.550457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:20:09.948 [2024-11-27 00:42:46.550468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.550657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.550674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.948 [2024-11-27 00:42:46.550690] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:20:09.948 [2024-11-27 00:42:46.550700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.559335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.559389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.948 [2024-11-27 00:42:46.559403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.606 ms 00:20:09.948 [2024-11-27 00:42:46.559411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.563322] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:09.948 [2024-11-27 00:42:46.563375] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:09.948 [2024-11-27 00:42:46.563389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.563398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:09.948 [2024-11-27 00:42:46.563407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:20:09.948 [2024-11-27 00:42:46.563415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.579605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.579653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:09.948 [2024-11-27 00:42:46.579666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.106 ms 00:20:09.948 [2024-11-27 00:42:46.579674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.582658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.582704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:09.948 [2024-11-27 00:42:46.582715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:20:09.948 [2024-11-27 00:42:46.582723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.585405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.585449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:09.948 [2024-11-27 00:42:46.585459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:20:09.948 [2024-11-27 00:42:46.585466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.585819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.948 [2024-11-27 00:42:46.585833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:09.948 [2024-11-27 00:42:46.585850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:09.948 [2024-11-27 00:42:46.585890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.948 [2024-11-27 00:42:46.612398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.612453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:09.949 [2024-11-27 00:42:46.612472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.484 ms 00:20:09.949 [2024-11-27 00:42:46.612481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 [2024-11-27 00:42:46.620863] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:09.949 [2024-11-27 00:42:46.640455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.640506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:09.949 [2024-11-27 00:42:46.640519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.875 ms 00:20:09.949 [2024-11-27 00:42:46.640528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 [2024-11-27 00:42:46.640620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.640632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:09.949 [2024-11-27 00:42:46.640646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:09.949 [2024-11-27 00:42:46.640660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 [2024-11-27 00:42:46.640719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.640731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:09.949 [2024-11-27 00:42:46.640740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:09.949 [2024-11-27 00:42:46.640749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 [2024-11-27 00:42:46.640777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.640786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:09.949 [2024-11-27 00:42:46.640798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:09.949 [2024-11-27 00:42:46.640809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 [2024-11-27 00:42:46.640890] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:09.949 [2024-11-27 00:42:46.640908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.640922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:09.949 [2024-11-27 00:42:46.640930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:09.949 [2024-11-27 00:42:46.640939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 [2024-11-27 00:42:46.647034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.647246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:09.949 [2024-11-27 00:42:46.647266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.069 ms 00:20:09.949 [2024-11-27 00:42:46.647281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 [2024-11-27 00:42:46.647369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.949 [2024-11-27 00:42:46.647381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:09.949 [2024-11-27 00:42:46.647391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:09.949 [2024-11-27 00:42:46.647399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.949 
[2024-11-27 00:42:46.648484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:09.949 [2024-11-27 00:42:46.649911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.007 ms, result 0 00:20:09.949 [2024-11-27 00:42:46.651310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:09.949 [2024-11-27 00:42:46.658526] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.335  [2024-11-27T00:42:49.066Z] Copying: 22/256 [MB] (22 MBps) [2024-11-27T00:42:50.009Z] Copying: 42/256 [MB] (19 MBps) [2024-11-27T00:42:50.952Z] Copying: 60/256 [MB] (17 MBps) [2024-11-27T00:42:51.897Z] Copying: 73/256 [MB] (13 MBps) [2024-11-27T00:42:52.842Z] Copying: 90/256 [MB] (16 MBps) [2024-11-27T00:42:53.785Z] Copying: 101/256 [MB] (11 MBps) [2024-11-27T00:42:54.726Z] Copying: 115/256 [MB] (13 MBps) [2024-11-27T00:42:56.112Z] Copying: 126/256 [MB] (10 MBps) [2024-11-27T00:42:57.054Z] Copying: 138/256 [MB] (12 MBps) [2024-11-27T00:42:57.997Z] Copying: 153/256 [MB] (15 MBps) [2024-11-27T00:42:58.941Z] Copying: 166/256 [MB] (12 MBps) [2024-11-27T00:42:59.881Z] Copying: 185/256 [MB] (19 MBps) [2024-11-27T00:43:00.826Z] Copying: 205/256 [MB] (20 MBps) [2024-11-27T00:43:01.839Z] Copying: 223/256 [MB] (17 MBps) [2024-11-27T00:43:02.785Z] Copying: 239/256 [MB] (16 MBps) [2024-11-27T00:43:03.048Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-27 00:43:02.966439] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.261 [2024-11-27 00:43:02.968706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.968778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.261 [2024-11-27 00:43:02.968802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:26.261 [2024-11-27 00:43:02.968817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.968882] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:26.261 [2024-11-27 00:43:02.969670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.969718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.261 [2024-11-27 00:43:02.969739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:20:26.261 [2024-11-27 00:43:02.969757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.970342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.970378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.261 [2024-11-27 00:43:02.970404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:20:26.261 [2024-11-27 00:43:02.970421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.974344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.974372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.261 [2024-11-27 00:43:02.974383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.893 ms 
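Stepping back to the copy summary above: the reported "average 16 MBps" is easy to sanity-check from the progress stamps, roughly 256 MB moved between ~00:42:46.7 (when spdk_dd's FTL startup finished) and ~00:43:03.0, about 16.3 seconds:

  # ~256 MB over ~16.3 s; the log rounds this to "average 16 MBps".
  awk 'BEGIN { printf "%.1f MBps\n", 256 / 16.3 }'   # -> 15.7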
00:20:26.261 [2024-11-27 00:43:02.974399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.982021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.982072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.261 [2024-11-27 00:43:02.982083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.601 ms 00:20:26.261 [2024-11-27 00:43:02.982095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.985230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.985282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.261 [2024-11-27 00:43:02.985293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:20:26.261 [2024-11-27 00:43:02.985301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.990245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.990292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.261 [2024-11-27 00:43:02.990303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:20:26.261 [2024-11-27 00:43:02.990311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.990445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.990457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.261 [2024-11-27 00:43:02.990475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:26.261 [2024-11-27 00:43:02.990484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.993983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.994027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:26.261 [2024-11-27 00:43:02.994037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:20:26.261 [2024-11-27 00:43:02.994046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.996802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.997026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:26.261 [2024-11-27 00:43:02.997045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:20:26.261 [2024-11-27 00:43:02.997053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:02.999322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:02.999368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.261 [2024-11-27 00:43:02.999379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:20:26.261 [2024-11-27 00:43:02.999387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:03.001637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.261 [2024-11-27 00:43:03.001681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:26.261 [2024-11-27 00:43:03.001692] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:20:26.261 [2024-11-27 00:43:03.001700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.261 [2024-11-27 00:43:03.001743] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.261 [2024-11-27 00:43:03.001760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.001993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002386] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.262 [2024-11-27 00:43:03.002525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 
00:43:03.002602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.263 [2024-11-27 00:43:03.002636] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.263 [2024-11-27 00:43:03.002644] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4eaa5371-a878-4f7e-9799-2a3c6c0cfb36 00:20:26.263 [2024-11-27 00:43:03.002654] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:26.263 [2024-11-27 00:43:03.002663] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:26.263 [2024-11-27 00:43:03.002675] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:26.263 [2024-11-27 00:43:03.002684] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:26.263 [2024-11-27 00:43:03.002692] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.263 [2024-11-27 00:43:03.002704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.263 [2024-11-27 00:43:03.002712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.263 [2024-11-27 00:43:03.002719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.263 [2024-11-27 00:43:03.002725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.263 [2024-11-27 00:43:03.002733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.263 [2024-11-27 00:43:03.002748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.263 [2024-11-27 00:43:03.002757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:20:26.263 [2024-11-27 00:43:03.002766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.005114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.263 [2024-11-27 00:43:03.005146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.263 [2024-11-27 00:43:03.005157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:20:26.263 [2024-11-27 00:43:03.005171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.005292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.263 [2024-11-27 00:43:03.005302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.263 [2024-11-27 00:43:03.005311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:26.263 [2024-11-27 00:43:03.005321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.012815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.013025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.263 [2024-11-27 00:43:03.013051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.013060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.013146] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.013157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.263 [2024-11-27 00:43:03.013165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.013173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.013225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.013236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.263 [2024-11-27 00:43:03.013244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.013252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.013274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.013282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.263 [2024-11-27 00:43:03.013291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.013300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.026778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.026829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.263 [2024-11-27 00:43:03.026840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.026878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.037803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.037866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.263 [2024-11-27 00:43:03.037878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.037887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.037937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.037948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.263 [2024-11-27 00:43:03.037957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.037966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.038005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.038014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.263 [2024-11-27 00:43:03.038023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.038033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.038109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.038121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.263 [2024-11-27 00:43:03.038130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.038138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:26.263 [2024-11-27 00:43:03.038171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.038186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:26.263 [2024-11-27 00:43:03.038195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.038203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.038264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.038276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.263 [2024-11-27 00:43:03.038286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.038294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.038352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.263 [2024-11-27 00:43:03.038364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.263 [2024-11-27 00:43:03.038373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.263 [2024-11-27 00:43:03.038381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.263 [2024-11-27 00:43:03.038535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.814 ms, result 0 00:20:26.524 00:20:26.524 00:20:26.524 00:43:03 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:27.097 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:27.097 00:43:03 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:27.097 00:43:03 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:27.097 00:43:03 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:27.097 00:43:03 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:27.097 00:43:03 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:27.360 00:43:03 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:27.360 00:43:03 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 88244 00:20:27.360 00:43:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88244 ']' 00:20:27.360 00:43:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88244 00:20:27.360 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88244) - No such process 00:20:27.360 00:43:03 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 88244 is not found' 00:20:27.360 Process with pid 88244 is not found 00:20:27.360 ************************************ 00:20:27.360 END TEST ftl_trim 00:20:27.360 ************************************ 00:20:27.360 00:20:27.360 real 1m6.780s 00:20:27.360 user 1m27.004s 00:20:27.360 sys 0m5.293s 00:20:27.360 00:43:03 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.360 00:43:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:27.360 00:43:03 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:27.360 00:43:03 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:27.360 00:43:03 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.360 00:43:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:27.360 ************************************ 00:20:27.360 START TEST ftl_restore 00:20:27.360 ************************************ 00:20:27.360 00:43:03 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:27.360 * Looking for test storage... 00:20:27.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.360 00:43:04 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.360 --rc genhtml_branch_coverage=1 00:20:27.360 --rc genhtml_function_coverage=1 00:20:27.360 --rc genhtml_legend=1 00:20:27.360 --rc geninfo_all_blocks=1 00:20:27.360 --rc geninfo_unexecuted_blocks=1 00:20:27.360 00:20:27.360 ' 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.360 --rc genhtml_branch_coverage=1 00:20:27.360 --rc genhtml_function_coverage=1 00:20:27.360 --rc genhtml_legend=1 00:20:27.360 --rc geninfo_all_blocks=1 00:20:27.360 --rc geninfo_unexecuted_blocks=1 00:20:27.360 00:20:27.360 ' 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.360 --rc genhtml_branch_coverage=1 00:20:27.360 --rc genhtml_function_coverage=1 00:20:27.360 --rc genhtml_legend=1 00:20:27.360 --rc geninfo_all_blocks=1 00:20:27.360 --rc geninfo_unexecuted_blocks=1 00:20:27.360 00:20:27.360 ' 00:20:27.360 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:27.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.360 --rc genhtml_branch_coverage=1 00:20:27.360 --rc genhtml_function_coverage=1 00:20:27.360 --rc genhtml_legend=1 00:20:27.360 --rc geninfo_all_blocks=1 00:20:27.360 --rc geninfo_unexecuted_blocks=1 00:20:27.360 00:20:27.360 ' 00:20:27.360 00:43:04 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:27.360 00:43:04 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:27.622 00:43:04 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:27.622 00:43:04 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
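The xtrace above walks scripts/common.sh's version check: `lt 1.15 2` expands to `cmp_versions 1.15 '<' 2`, each version string is split into components on `.`, `-` and `:`, and the components are compared left to right until one side wins. A minimal standalone sketch of the same idiom, simplified from the traced helper (not the verbatim SPDK source):

    #!/usr/bin/env bash
    # Sketch of the cmp_versions idiom traced above (simplified reconstruction).
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing components compare as 0
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]   # all components equal: true for ==, <=, >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }

Here `lt 1.15 2` succeeds on the first component (1 < 2), which is why the trace above ends with `return 0` and the lcov coverage options get exported.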
00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.nJXuY3wo5Q 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:27.623 
00:43:04 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88539 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88539 00:20:27.623 00:43:04 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:27.623 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88539 ']' 00:20:27.623 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.623 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.623 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.623 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.623 00:43:04 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:27.623 [2024-11-27 00:43:04.254522] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:20:27.623 [2024-11-27 00:43:04.255129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88539 ] 00:20:27.884 [2024-11-27 00:43:04.418557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.884 [2024-11-27 00:43:04.447147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.457 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.457 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:28.457 00:43:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:28.457 00:43:05 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:28.457 00:43:05 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:28.457 00:43:05 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:28.457 00:43:05 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:28.457 00:43:05 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:28.718 00:43:05 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:28.718 00:43:05 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:28.718 00:43:05 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:28.718 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:28.718 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:28.718 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:28.718 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:28.718 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:28.979 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:28.979 { 00:20:28.979 "name": "nvme0n1", 00:20:28.979 "aliases": [ 00:20:28.979 "50080acb-eeb4-4f12-a394-90b769281c65" 00:20:28.979 ], 00:20:28.979 "product_name": "NVMe disk", 00:20:28.979 "block_size": 4096, 00:20:28.979 "num_blocks": 1310720, 00:20:28.979 "uuid": 
"50080acb-eeb4-4f12-a394-90b769281c65", 00:20:28.979 "numa_id": -1, 00:20:28.979 "assigned_rate_limits": { 00:20:28.979 "rw_ios_per_sec": 0, 00:20:28.979 "rw_mbytes_per_sec": 0, 00:20:28.979 "r_mbytes_per_sec": 0, 00:20:28.979 "w_mbytes_per_sec": 0 00:20:28.979 }, 00:20:28.979 "claimed": true, 00:20:28.979 "claim_type": "read_many_write_one", 00:20:28.979 "zoned": false, 00:20:28.979 "supported_io_types": { 00:20:28.979 "read": true, 00:20:28.979 "write": true, 00:20:28.979 "unmap": true, 00:20:28.979 "flush": true, 00:20:28.979 "reset": true, 00:20:28.979 "nvme_admin": true, 00:20:28.979 "nvme_io": true, 00:20:28.979 "nvme_io_md": false, 00:20:28.979 "write_zeroes": true, 00:20:28.979 "zcopy": false, 00:20:28.979 "get_zone_info": false, 00:20:28.979 "zone_management": false, 00:20:28.979 "zone_append": false, 00:20:28.979 "compare": true, 00:20:28.979 "compare_and_write": false, 00:20:28.979 "abort": true, 00:20:28.979 "seek_hole": false, 00:20:28.979 "seek_data": false, 00:20:28.979 "copy": true, 00:20:28.979 "nvme_iov_md": false 00:20:28.979 }, 00:20:28.979 "driver_specific": { 00:20:28.979 "nvme": [ 00:20:28.979 { 00:20:28.979 "pci_address": "0000:00:11.0", 00:20:28.979 "trid": { 00:20:28.979 "trtype": "PCIe", 00:20:28.979 "traddr": "0000:00:11.0" 00:20:28.979 }, 00:20:28.979 "ctrlr_data": { 00:20:28.979 "cntlid": 0, 00:20:28.979 "vendor_id": "0x1b36", 00:20:28.979 "model_number": "QEMU NVMe Ctrl", 00:20:28.979 "serial_number": "12341", 00:20:28.979 "firmware_revision": "8.0.0", 00:20:28.979 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:28.979 "oacs": { 00:20:28.979 "security": 0, 00:20:28.979 "format": 1, 00:20:28.979 "firmware": 0, 00:20:28.979 "ns_manage": 1 00:20:28.979 }, 00:20:28.979 "multi_ctrlr": false, 00:20:28.979 "ana_reporting": false 00:20:28.979 }, 00:20:28.979 "vs": { 00:20:28.979 "nvme_version": "1.4" 00:20:28.979 }, 00:20:28.980 "ns_data": { 00:20:28.980 "id": 1, 00:20:28.980 "can_share": false 00:20:28.980 } 00:20:28.980 } 00:20:28.980 ], 00:20:28.980 "mp_policy": "active_passive" 00:20:28.980 } 00:20:28.980 } 00:20:28.980 ]' 00:20:28.980 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:28.980 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:28.980 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:28.980 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:28.980 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:28.980 00:43:05 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:28.980 00:43:05 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:28.980 00:43:05 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:28.980 00:43:05 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:28.980 00:43:05 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:28.980 00:43:05 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:29.241 00:43:05 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=c3ccb55b-1325-44cb-a8a0-0cfe56325f2f 00:20:29.241 00:43:05 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:29.241 00:43:05 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3ccb55b-1325-44cb-a8a0-0cfe56325f2f 00:20:29.502 00:43:06 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:29.762 00:43:06 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5730e268-fd55-41b2-acf6-6f97aa1104f8 00:20:29.762 00:43:06 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5730e268-fd55-41b2-acf6-6f97aa1104f8 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:30.024 00:43:06 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.024 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.024 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:30.024 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:30.024 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:30.024 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.024 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:30.024 { 00:20:30.024 "name": "c4616646-8a76-4ea4-8ab3-5c484aec27f5", 00:20:30.024 "aliases": [ 00:20:30.024 "lvs/nvme0n1p0" 00:20:30.024 ], 00:20:30.024 "product_name": "Logical Volume", 00:20:30.024 "block_size": 4096, 00:20:30.024 "num_blocks": 26476544, 00:20:30.024 "uuid": "c4616646-8a76-4ea4-8ab3-5c484aec27f5", 00:20:30.024 "assigned_rate_limits": { 00:20:30.024 "rw_ios_per_sec": 0, 00:20:30.024 "rw_mbytes_per_sec": 0, 00:20:30.024 "r_mbytes_per_sec": 0, 00:20:30.024 "w_mbytes_per_sec": 0 00:20:30.024 }, 00:20:30.024 "claimed": false, 00:20:30.024 "zoned": false, 00:20:30.024 "supported_io_types": { 00:20:30.024 "read": true, 00:20:30.024 "write": true, 00:20:30.024 "unmap": true, 00:20:30.024 "flush": false, 00:20:30.024 "reset": true, 00:20:30.024 "nvme_admin": false, 00:20:30.024 "nvme_io": false, 00:20:30.024 "nvme_io_md": false, 00:20:30.024 "write_zeroes": true, 00:20:30.024 "zcopy": false, 00:20:30.024 "get_zone_info": false, 00:20:30.024 "zone_management": false, 00:20:30.024 "zone_append": false, 00:20:30.024 "compare": false, 00:20:30.024 "compare_and_write": false, 00:20:30.024 "abort": false, 00:20:30.024 "seek_hole": true, 00:20:30.024 "seek_data": true, 00:20:30.024 "copy": false, 00:20:30.024 "nvme_iov_md": false 00:20:30.024 }, 00:20:30.024 "driver_specific": { 00:20:30.024 "lvol": { 00:20:30.024 "lvol_store_uuid": "5730e268-fd55-41b2-acf6-6f97aa1104f8", 00:20:30.024 "base_bdev": "nvme0n1", 00:20:30.024 "thin_provision": true, 00:20:30.024 "num_allocated_clusters": 0, 00:20:30.024 "snapshot": false, 00:20:30.024 "clone": false, 00:20:30.024 "esnap_clone": false 00:20:30.024 } 00:20:30.024 } 00:20:30.024 } 00:20:30.024 ]' 00:20:30.024 00:43:06 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:30.285 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:30.286 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:30.286 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:30.286 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:30.286 00:43:06 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:30.286 00:43:06 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:30.286 00:43:06 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:30.286 00:43:06 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:30.547 00:43:07 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:30.547 00:43:07 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:30.547 00:43:07 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.547 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.547 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:30.547 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:30.547 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:30.547 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:30.808 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:30.808 { 00:20:30.809 "name": "c4616646-8a76-4ea4-8ab3-5c484aec27f5", 00:20:30.809 "aliases": [ 00:20:30.809 "lvs/nvme0n1p0" 00:20:30.809 ], 00:20:30.809 "product_name": "Logical Volume", 00:20:30.809 "block_size": 4096, 00:20:30.809 "num_blocks": 26476544, 00:20:30.809 "uuid": "c4616646-8a76-4ea4-8ab3-5c484aec27f5", 00:20:30.809 "assigned_rate_limits": { 00:20:30.809 "rw_ios_per_sec": 0, 00:20:30.809 "rw_mbytes_per_sec": 0, 00:20:30.809 "r_mbytes_per_sec": 0, 00:20:30.809 "w_mbytes_per_sec": 0 00:20:30.809 }, 00:20:30.809 "claimed": false, 00:20:30.809 "zoned": false, 00:20:30.809 "supported_io_types": { 00:20:30.809 "read": true, 00:20:30.809 "write": true, 00:20:30.809 "unmap": true, 00:20:30.809 "flush": false, 00:20:30.809 "reset": true, 00:20:30.809 "nvme_admin": false, 00:20:30.809 "nvme_io": false, 00:20:30.809 "nvme_io_md": false, 00:20:30.809 "write_zeroes": true, 00:20:30.809 "zcopy": false, 00:20:30.809 "get_zone_info": false, 00:20:30.809 "zone_management": false, 00:20:30.809 "zone_append": false, 00:20:30.809 "compare": false, 00:20:30.809 "compare_and_write": false, 00:20:30.809 "abort": false, 00:20:30.809 "seek_hole": true, 00:20:30.809 "seek_data": true, 00:20:30.809 "copy": false, 00:20:30.809 "nvme_iov_md": false 00:20:30.809 }, 00:20:30.809 "driver_specific": { 00:20:30.809 "lvol": { 00:20:30.809 "lvol_store_uuid": "5730e268-fd55-41b2-acf6-6f97aa1104f8", 00:20:30.809 "base_bdev": "nvme0n1", 00:20:30.809 "thin_provision": true, 00:20:30.809 "num_allocated_clusters": 0, 00:20:30.809 "snapshot": false, 00:20:30.809 "clone": false, 00:20:30.809 "esnap_clone": false 00:20:30.809 } 00:20:30.809 } 00:20:30.809 } 00:20:30.809 ]' 00:20:30.809 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
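The get_bdev_size helper traced here (for the thin-provisioned lvol, and earlier for nvme0n1) pulls `block_size` and `num_blocks` out of the bdev_get_bdevs JSON with jq and reports the size in MiB; the records that follow show bs=4096 and nb=26476544 resolving to 103424 MiB. A minimal sketch of that calculation, assuming the rpc.py path shown in this log:

    #!/usr/bin/env bash
    # Sketch: bdev size in MiB, as computed by the get_bdev_size trace.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run

    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 (nvme0n1) or 26476544 (lvol)
        echo $(( bs * nb / 1024 / 1024 ))             # 5120 MiB and 103424 MiB respectively
    }

The 5120 MiB result for nvme0n1 is what the earlier `[[ 103424 -le 5120 ]]` guard checks before the 103424 MiB lvol is carved out of the lvstore.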
00:20:30.809 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:30.809 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:30.809 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:30.809 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:30.809 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:30.809 00:43:07 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:30.809 00:43:07 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:31.071 00:43:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:31.071 00:43:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:31.071 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:31.071 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:31.071 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:31.071 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:31.071 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c4616646-8a76-4ea4-8ab3-5c484aec27f5 00:20:31.333 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:31.333 { 00:20:31.333 "name": "c4616646-8a76-4ea4-8ab3-5c484aec27f5", 00:20:31.333 "aliases": [ 00:20:31.333 "lvs/nvme0n1p0" 00:20:31.333 ], 00:20:31.333 "product_name": "Logical Volume", 00:20:31.333 "block_size": 4096, 00:20:31.333 "num_blocks": 26476544, 00:20:31.333 "uuid": "c4616646-8a76-4ea4-8ab3-5c484aec27f5", 00:20:31.333 "assigned_rate_limits": { 00:20:31.333 "rw_ios_per_sec": 0, 00:20:31.333 "rw_mbytes_per_sec": 0, 00:20:31.333 "r_mbytes_per_sec": 0, 00:20:31.333 "w_mbytes_per_sec": 0 00:20:31.333 }, 00:20:31.333 "claimed": false, 00:20:31.333 "zoned": false, 00:20:31.333 "supported_io_types": { 00:20:31.333 "read": true, 00:20:31.333 "write": true, 00:20:31.333 "unmap": true, 00:20:31.333 "flush": false, 00:20:31.333 "reset": true, 00:20:31.333 "nvme_admin": false, 00:20:31.333 "nvme_io": false, 00:20:31.333 "nvme_io_md": false, 00:20:31.333 "write_zeroes": true, 00:20:31.333 "zcopy": false, 00:20:31.333 "get_zone_info": false, 00:20:31.333 "zone_management": false, 00:20:31.333 "zone_append": false, 00:20:31.333 "compare": false, 00:20:31.333 "compare_and_write": false, 00:20:31.333 "abort": false, 00:20:31.333 "seek_hole": true, 00:20:31.333 "seek_data": true, 00:20:31.333 "copy": false, 00:20:31.333 "nvme_iov_md": false 00:20:31.333 }, 00:20:31.333 "driver_specific": { 00:20:31.333 "lvol": { 00:20:31.333 "lvol_store_uuid": "5730e268-fd55-41b2-acf6-6f97aa1104f8", 00:20:31.333 "base_bdev": "nvme0n1", 00:20:31.333 "thin_provision": true, 00:20:31.333 "num_allocated_clusters": 0, 00:20:31.333 "snapshot": false, 00:20:31.333 "clone": false, 00:20:31.333 "esnap_clone": false 00:20:31.333 } 00:20:31.333 } 00:20:31.333 } 00:20:31.333 ]' 00:20:31.333 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:31.333 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:31.333 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:31.333 00:43:07 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:31.333 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:31.333 00:43:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:31.333 00:43:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:31.333 00:43:07 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c4616646-8a76-4ea4-8ab3-5c484aec27f5 --l2p_dram_limit 10' 00:20:31.333 00:43:07 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:31.333 00:43:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:31.333 00:43:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:31.333 00:43:07 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:31.333 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:31.333 00:43:07 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c4616646-8a76-4ea4-8ab3-5c484aec27f5 --l2p_dram_limit 10 -c nvc0n1p0 00:20:31.596 [2024-11-27 00:43:08.126727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.126765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.596 [2024-11-27 00:43:08.126776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.596 [2024-11-27 00:43:08.126783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.126827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.126838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.596 [2024-11-27 00:43:08.126844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:31.596 [2024-11-27 00:43:08.126861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.126877] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.596 [2024-11-27 00:43:08.127107] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.596 [2024-11-27 00:43:08.127124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.127133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.596 [2024-11-27 00:43:08.127140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:20:31.596 [2024-11-27 00:43:08.127147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.127595] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2f54237f-b41a-4fe3-9f71-78ff667aded7 00:20:31.596 [2024-11-27 00:43:08.128527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.128551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:31.596 [2024-11-27 00:43:08.128561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:31.596 [2024-11-27 00:43:08.128568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.133241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 
00:43:08.133269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.596 [2024-11-27 00:43:08.133281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.627 ms 00:20:31.596 [2024-11-27 00:43:08.133287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.133346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.133352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.596 [2024-11-27 00:43:08.133360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:31.596 [2024-11-27 00:43:08.133366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.133403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.133411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.596 [2024-11-27 00:43:08.133418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:31.596 [2024-11-27 00:43:08.133424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.133442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.596 [2024-11-27 00:43:08.134699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.134815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.596 [2024-11-27 00:43:08.134826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:20:31.596 [2024-11-27 00:43:08.134833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.134871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.596 [2024-11-27 00:43:08.134880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.596 [2024-11-27 00:43:08.134886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:31.596 [2024-11-27 00:43:08.134896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.596 [2024-11-27 00:43:08.134909] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:31.596 [2024-11-27 00:43:08.135021] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.596 [2024-11-27 00:43:08.135031] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.596 [2024-11-27 00:43:08.135041] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.596 [2024-11-27 00:43:08.135054] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.596 [2024-11-27 00:43:08.135062] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.596 [2024-11-27 00:43:08.135072] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:31.596 [2024-11-27 00:43:08.135080] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.596 [2024-11-27 00:43:08.135085] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.596 [2024-11-27 00:43:08.135092] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.597 [2024-11-27 00:43:08.135099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.597 [2024-11-27 00:43:08.135106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.597 [2024-11-27 00:43:08.135112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:31.597 [2024-11-27 00:43:08.135119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.597 [2024-11-27 00:43:08.135189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.597 [2024-11-27 00:43:08.135200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.597 [2024-11-27 00:43:08.135206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:31.597 [2024-11-27 00:43:08.135215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.597 [2024-11-27 00:43:08.135289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.597 [2024-11-27 00:43:08.135298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.597 [2024-11-27 00:43:08.135308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.597 [2024-11-27 00:43:08.135329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.597 [2024-11-27 00:43:08.135348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.597 [2024-11-27 00:43:08.135359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.597 [2024-11-27 00:43:08.135366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:31.597 [2024-11-27 00:43:08.135373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.597 [2024-11-27 00:43:08.135381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.597 [2024-11-27 00:43:08.135387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:31.597 [2024-11-27 00:43:08.135395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.597 [2024-11-27 00:43:08.135409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.597 [2024-11-27 00:43:08.135429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.597 
[2024-11-27 00:43:08.135450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.597 [2024-11-27 00:43:08.135471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.597 [2024-11-27 00:43:08.135493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.597 [2024-11-27 00:43:08.135512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.597 [2024-11-27 00:43:08.135526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.597 [2024-11-27 00:43:08.135533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:31.597 [2024-11-27 00:43:08.135539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.597 [2024-11-27 00:43:08.135546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.597 [2024-11-27 00:43:08.135551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:31.597 [2024-11-27 00:43:08.135559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.597 [2024-11-27 00:43:08.135574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:31.597 [2024-11-27 00:43:08.135579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135586] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.597 [2024-11-27 00:43:08.135594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.597 [2024-11-27 00:43:08.135604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.597 [2024-11-27 00:43:08.135623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.597 [2024-11-27 00:43:08.135629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.597 [2024-11-27 00:43:08.135636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.597 [2024-11-27 00:43:08.135642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.597 [2024-11-27 00:43:08.135650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.597 [2024-11-27 00:43:08.135656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.597 [2024-11-27 00:43:08.135665] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.597 [2024-11-27 
00:43:08.135677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.597 [2024-11-27 00:43:08.135686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:31.597 [2024-11-27 00:43:08.135692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:31.597 [2024-11-27 00:43:08.135700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:31.597 [2024-11-27 00:43:08.135706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:31.597 [2024-11-27 00:43:08.135714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:31.597 [2024-11-27 00:43:08.135720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:31.597 [2024-11-27 00:43:08.135729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:31.597 [2024-11-27 00:43:08.135736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:31.597 [2024-11-27 00:43:08.135744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:31.597 [2024-11-27 00:43:08.135749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:31.597 [2024-11-27 00:43:08.135757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:31.597 [2024-11-27 00:43:08.135763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:31.597 [2024-11-27 00:43:08.135771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:31.597 [2024-11-27 00:43:08.135777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:31.597 [2024-11-27 00:43:08.135783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.597 [2024-11-27 00:43:08.135789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.597 [2024-11-27 00:43:08.135796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.597 [2024-11-27 00:43:08.135802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.598 [2024-11-27 00:43:08.135808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.598 [2024-11-27 00:43:08.135814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.598 [2024-11-27 00:43:08.135821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.598 [2024-11-27 00:43:08.135829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.598 [2024-11-27 00:43:08.135838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:20:31.598 [2024-11-27 00:43:08.135843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.598 [2024-11-27 00:43:08.135884] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:31.598 [2024-11-27 00:43:08.135892] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:35.818 [2024-11-27 00:43:11.852236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.852547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:35.819 [2024-11-27 00:43:11.852581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3716.329 ms 00:20:35.819 [2024-11-27 00:43:11.852591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:11.866847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.866916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.819 [2024-11-27 00:43:11.866933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.129 ms 00:20:35.819 [2024-11-27 00:43:11.866945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:11.867104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.867118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.819 [2024-11-27 00:43:11.867130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:35.819 [2024-11-27 00:43:11.867138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:11.879254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.879301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.819 [2024-11-27 00:43:11.879316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.075 ms 00:20:35.819 [2024-11-27 00:43:11.879326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:11.879363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.879372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.819 [2024-11-27 00:43:11.879383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.819 [2024-11-27 00:43:11.879391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:11.879921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.879962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.819 [2024-11-27 00:43:11.879977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:20:35.819 [2024-11-27 00:43:11.879986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 
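Each management step in the trace above is logged by mngt/ftl_mngt.c as a four-entry group: Action, name, duration, status. That structure makes it straightforward to pull a per-step timing profile out of a console dump like this one, e.g. to confirm that "Scrub NV cache" (3716.329 ms above) dominates FTL startup. A rough sketch, assuming one *NOTICE* entry per line as spdk_tgt prints them, with a hypothetical capture file ftl0.log:

  awk '
    /trace_step/ && /name:/     { sub(/.*name: /, ""); step = $0 }          # remember the step name
    /trace_step/ && /duration:/ { dur = $0                                  # strip down to the number
                                  sub(/.*duration: /, "", dur); sub(/ ms.*/, "", dur)
                                  printf "%10.3f ms  %s\n", dur, step }
  ' ftl0.log | sort -rn | head

Sorting numerically puts the slowest steps first; on this run that is the NV cache scrub, with nearly every other step in the low-millisecond range.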
[2024-11-27 00:43:11.880112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.880126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.819 [2024-11-27 00:43:11.880138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:35.819 [2024-11-27 00:43:11.880147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:11.887906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.887946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.819 [2024-11-27 00:43:11.887959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:20:35.819 [2024-11-27 00:43:11.887968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:11.910529] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:35.819 [2024-11-27 00:43:11.914546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:11.914767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.819 [2024-11-27 00:43:11.914790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.507 ms 00:20:35.819 [2024-11-27 00:43:11.914802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.006102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.006175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:35.819 [2024-11-27 00:43:12.006190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.252 ms 00:20:35.819 [2024-11-27 00:43:12.006205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.006443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.006461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.819 [2024-11-27 00:43:12.006476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:20:35.819 [2024-11-27 00:43:12.006486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.013311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.013507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:35.819 [2024-11-27 00:43:12.013533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.786 ms 00:20:35.819 [2024-11-27 00:43:12.013544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.019787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.019881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:35.819 [2024-11-27 00:43:12.019896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.835 ms 00:20:35.819 [2024-11-27 00:43:12.019907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.020276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.020293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.819 
[2024-11-27 00:43:12.020303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:20:35.819 [2024-11-27 00:43:12.020315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.068453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.068525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:35.819 [2024-11-27 00:43:12.068544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.085 ms 00:20:35.819 [2024-11-27 00:43:12.068555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.076441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.076502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:35.819 [2024-11-27 00:43:12.076514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.788 ms 00:20:35.819 [2024-11-27 00:43:12.076525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.083456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.083515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:35.819 [2024-11-27 00:43:12.083526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:20:35.819 [2024-11-27 00:43:12.083536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.090381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.090607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.819 [2024-11-27 00:43:12.090626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:20:35.819 [2024-11-27 00:43:12.090639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.090778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.090820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.819 [2024-11-27 00:43:12.090831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:35.819 [2024-11-27 00:43:12.090842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.090967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.090989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:35.819 [2024-11-27 00:43:12.090997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:35.819 [2024-11-27 00:43:12.091010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.092213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3964.911 ms, result 0 00:20:35.819 { 00:20:35.819 "name": "ftl0", 00:20:35.819 "uuid": "2f54237f-b41a-4fe3-9f71-78ff667aded7" 00:20:35.819 } 00:20:35.819 00:43:12 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:35.819 00:43:12 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:35.819 00:43:12 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:35.819 00:43:12 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:35.819 [2024-11-27 00:43:12.541209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.541475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:35.819 [2024-11-27 00:43:12.541508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.819 [2024-11-27 00:43:12.541519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.541560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:35.819 [2024-11-27 00:43:12.542370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.542419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:35.819 [2024-11-27 00:43:12.542433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:20:35.819 [2024-11-27 00:43:12.542445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.542720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.542736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:35.819 [2024-11-27 00:43:12.542749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:20:35.819 [2024-11-27 00:43:12.542764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.546036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.546065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:35.819 [2024-11-27 00:43:12.546076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:20:35.819 [2024-11-27 00:43:12.546087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.552342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.552564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:35.819 [2024-11-27 00:43:12.552587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.237 ms 00:20:35.819 [2024-11-27 00:43:12.552602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.819 [2024-11-27 00:43:12.555786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.819 [2024-11-27 00:43:12.556023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:35.819 [2024-11-27 00:43:12.556044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.069 ms 00:20:35.820 [2024-11-27 00:43:12.556055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.820 [2024-11-27 00:43:12.562437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.820 [2024-11-27 00:43:12.562508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:35.820 [2024-11-27 00:43:12.562526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.257 ms 00:20:35.820 [2024-11-27 00:43:12.562537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.820 [2024-11-27 00:43:12.562696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.820 [2024-11-27 00:43:12.562719] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:35.820 [2024-11-27 00:43:12.562729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:35.820 [2024-11-27 00:43:12.562740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.820 [2024-11-27 00:43:12.565555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.820 [2024-11-27 00:43:12.565615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:35.820 [2024-11-27 00:43:12.565626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:20:35.820 [2024-11-27 00:43:12.565636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.820 [2024-11-27 00:43:12.567957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.820 [2024-11-27 00:43:12.568017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:35.820 [2024-11-27 00:43:12.568027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:20:35.820 [2024-11-27 00:43:12.568037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.820 [2024-11-27 00:43:12.570219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.820 [2024-11-27 00:43:12.570306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:35.820 [2024-11-27 00:43:12.570318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.134 ms 00:20:35.820 [2024-11-27 00:43:12.570329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.820 [2024-11-27 00:43:12.572972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.820 [2024-11-27 00:43:12.573030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:35.820 [2024-11-27 00:43:12.573040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:20:35.820 [2024-11-27 00:43:12.573051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.820 [2024-11-27 00:43:12.573096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:35.820 [2024-11-27 00:43:12.573115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573205] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 
[2024-11-27 00:43:12.573436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:35.820 [2024-11-27 00:43:12.573677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:35.820 [2024-11-27 00:43:12.573764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.573995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:35.821 [2024-11-27 00:43:12.574111] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:35.821 [2024-11-27 00:43:12.574120] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f54237f-b41a-4fe3-9f71-78ff667aded7 00:20:35.821 [2024-11-27 00:43:12.574131] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:35.821 [2024-11-27 00:43:12.574139] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:35.821 [2024-11-27 00:43:12.574148] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:35.821 [2024-11-27 00:43:12.574157] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:35.821 [2024-11-27 00:43:12.574172] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:35.821 [2024-11-27 00:43:12.574181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:35.821 [2024-11-27 00:43:12.574192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:35.821 [2024-11-27 00:43:12.574198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:35.821 [2024-11-27 00:43:12.574207] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:35.821 [2024-11-27 00:43:12.574216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.821 [2024-11-27 00:43:12.574225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:35.821 [2024-11-27 00:43:12.574251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:20:35.821 [2024-11-27 00:43:12.574266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.821 [2024-11-27 00:43:12.576769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.821 [2024-11-27 00:43:12.576817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:35.821 [2024-11-27 00:43:12.576831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.459 ms 00:20:35.821 [2024-11-27 00:43:12.576842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.821 [2024-11-27 00:43:12.577012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.821 [2024-11-27 00:43:12.577028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:35.821 [2024-11-27 00:43:12.577043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:35.821 [2024-11-27 00:43:12.577053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.821 [2024-11-27 00:43:12.585744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.821 [2024-11-27 00:43:12.585809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.821 [2024-11-27 00:43:12.585821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.821 [2024-11-27 00:43:12.585832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.821 [2024-11-27 00:43:12.585924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.821 [2024-11-27 00:43:12.585936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.821 [2024-11-27 00:43:12.585945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.821 [2024-11-27 00:43:12.585955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.821 [2024-11-27 00:43:12.586039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.821 [2024-11-27 00:43:12.586057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.821 [2024-11-27 00:43:12.586066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.821 [2024-11-27 00:43:12.586079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.821 [2024-11-27 00:43:12.586098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.821 [2024-11-27 00:43:12.586109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.821 [2024-11-27 00:43:12.586119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.821 [2024-11-27 00:43:12.586129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.821 [2024-11-27 00:43:12.600766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:35.821 [2024-11-27 00:43:12.600825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.821 [2024-11-27 00:43:12.600840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:35.821 
[2024-11-27 00:43:12.600874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.082 [2024-11-27 00:43:12.612130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.082 [2024-11-27 00:43:12.612184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.082 [2024-11-27 00:43:12.612195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.083 [2024-11-27 00:43:12.612206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.083 [2024-11-27 00:43:12.612285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.083 [2024-11-27 00:43:12.612300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.083 [2024-11-27 00:43:12.612308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.083 [2024-11-27 00:43:12.612318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.083 [2024-11-27 00:43:12.612367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.083 [2024-11-27 00:43:12.612380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.083 [2024-11-27 00:43:12.612390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.083 [2024-11-27 00:43:12.612400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.083 [2024-11-27 00:43:12.612473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.083 [2024-11-27 00:43:12.612485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.083 [2024-11-27 00:43:12.612493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.083 [2024-11-27 00:43:12.612503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.083 [2024-11-27 00:43:12.612542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.083 [2024-11-27 00:43:12.612555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:36.083 [2024-11-27 00:43:12.612563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.083 [2024-11-27 00:43:12.612572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.083 [2024-11-27 00:43:12.612610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.083 [2024-11-27 00:43:12.612624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.083 [2024-11-27 00:43:12.612636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.083 [2024-11-27 00:43:12.612646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.083 [2024-11-27 00:43:12.612694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.083 [2024-11-27 00:43:12.612707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.083 [2024-11-27 00:43:12.612715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.083 [2024-11-27 00:43:12.612725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.083 [2024-11-27 00:43:12.612901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.628 ms, result 0 00:20:36.083 true 00:20:36.083 00:43:12 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88539 00:20:36.083 
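After bdev_ftl_unload completes the 'FTL shutdown' management process, restore.sh@66 tears the target down with killprocess from autotest_common.sh; its expansion is traced below. Condensed, the pattern amounts to roughly this (a simplified sketch, not the full helper — as the trace shows, the real one also resolves the process name via ps --no-headers -o comm= and refuses to kill a sudo wrapper):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1             # no pid given, nothing to do
    kill -0 "$pid" || return 0            # process already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                           # reap it before the next test stage runs
  }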
00:43:12 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88539 ']' 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88539 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88539 00:20:36.083 killing process with pid 88539 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88539' 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88539 00:20:36.083 00:43:12 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88539 00:20:42.679 00:43:18 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:45.985 262144+0 records in 00:20:45.985 262144+0 records out 00:20:45.985 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.13752 s, 260 MB/s 00:20:45.985 00:43:22 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:48.530 00:43:24 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:48.530 [2024-11-27 00:43:24.849706] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:20:48.530 [2024-11-27 00:43:24.849798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88758 ] 00:20:48.530 [2024-11-27 00:43:25.003933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.530 [2024-11-27 00:43:25.024660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.530 [2024-11-27 00:43:25.114574] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.530 [2024-11-27 00:43:25.114645] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.530 [2024-11-27 00:43:25.274614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.274672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:48.530 [2024-11-27 00:43:25.274688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.530 [2024-11-27 00:43:25.274700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.274755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.274765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.530 [2024-11-27 00:43:25.274774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:48.530 [2024-11-27 00:43:25.274782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.274808] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:48.530 [2024-11-27 00:43:25.275117] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:48.530 [2024-11-27 00:43:25.275140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.275150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.530 [2024-11-27 00:43:25.275165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:20:48.530 [2024-11-27 00:43:25.275173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.276792] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:48.530 [2024-11-27 00:43:25.280427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.280479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:48.530 [2024-11-27 00:43:25.280502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:20:48.530 [2024-11-27 00:43:25.280513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.280585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.280599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:48.530 [2024-11-27 00:43:25.280609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:48.530 [2024-11-27 00:43:25.280616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.288963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.289003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.530 [2024-11-27 00:43:25.289021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.303 ms 00:20:48.530 [2024-11-27 00:43:25.289030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.289127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.289137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.530 [2024-11-27 00:43:25.289147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:48.530 [2024-11-27 00:43:25.289158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.289218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.289235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:48.530 [2024-11-27 00:43:25.289245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:48.530 [2024-11-27 00:43:25.289257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.289281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:48.530 [2024-11-27 00:43:25.291428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.530 [2024-11-27 00:43:25.291465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.530 [2024-11-27 00:43:25.291476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.152 ms 00:20:48.530 [2024-11-27 00:43:25.291484] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.530 [2024-11-27 00:43:25.291518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.531 [2024-11-27 00:43:25.291527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:48.531 [2024-11-27 00:43:25.291536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:48.531 [2024-11-27 00:43:25.291549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.531 [2024-11-27 00:43:25.291574] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:48.531 [2024-11-27 00:43:25.291594] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:48.531 [2024-11-27 00:43:25.291633] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:48.531 [2024-11-27 00:43:25.291650] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:48.531 [2024-11-27 00:43:25.291760] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:48.531 [2024-11-27 00:43:25.291777] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:48.531 [2024-11-27 00:43:25.291795] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:48.531 [2024-11-27 00:43:25.291805] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:48.531 [2024-11-27 00:43:25.291815] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:48.531 [2024-11-27 00:43:25.291824] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:48.531 [2024-11-27 00:43:25.291837] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:48.531 [2024-11-27 00:43:25.291874] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:48.531 [2024-11-27 00:43:25.291884] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:48.531 [2024-11-27 00:43:25.291897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.531 [2024-11-27 00:43:25.291906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:48.531 [2024-11-27 00:43:25.291917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:20:48.531 [2024-11-27 00:43:25.291929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.531 [2024-11-27 00:43:25.292019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.531 [2024-11-27 00:43:25.292033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:48.531 [2024-11-27 00:43:25.292044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:48.531 [2024-11-27 00:43:25.292053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.531 [2024-11-27 00:43:25.292157] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:48.531 [2024-11-27 00:43:25.292170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:48.531 [2024-11-27 00:43:25.292185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.531 
[2024-11-27 00:43:25.292200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:48.531 [2024-11-27 00:43:25.292219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:48.531 [2024-11-27 00:43:25.292244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.531 [2024-11-27 00:43:25.292265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:48.531 [2024-11-27 00:43:25.292273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:48.531 [2024-11-27 00:43:25.292281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.531 [2024-11-27 00:43:25.292289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:48.531 [2024-11-27 00:43:25.292299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:48.531 [2024-11-27 00:43:25.292308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:48.531 [2024-11-27 00:43:25.292325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:48.531 [2024-11-27 00:43:25.292354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:48.531 [2024-11-27 00:43:25.292377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:48.531 [2024-11-27 00:43:25.292407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:48.531 [2024-11-27 00:43:25.292432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:48.531 [2024-11-27 00:43:25.292456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.531 [2024-11-27 00:43:25.292471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:48.531 [2024-11-27 00:43:25.292479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:48.531 [2024-11-27 00:43:25.292487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.531 [2024-11-27 00:43:25.292494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:48.531 [2024-11-27 00:43:25.292502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:48.531 [2024-11-27 00:43:25.292509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:48.531 [2024-11-27 00:43:25.292525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:48.531 [2024-11-27 00:43:25.292536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292543] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:48.531 [2024-11-27 00:43:25.292554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:48.531 [2024-11-27 00:43:25.292566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.531 [2024-11-27 00:43:25.292585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:48.531 [2024-11-27 00:43:25.292592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:48.531 [2024-11-27 00:43:25.292599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:48.531 [2024-11-27 00:43:25.292609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:48.531 [2024-11-27 00:43:25.292616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:48.531 [2024-11-27 00:43:25.292623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:48.531 [2024-11-27 00:43:25.292633] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:48.531 [2024-11-27 00:43:25.292643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.531 [2024-11-27 00:43:25.292653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:48.531 [2024-11-27 00:43:25.292661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:48.531 [2024-11-27 00:43:25.292669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:48.531 [2024-11-27 00:43:25.292679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:48.531 [2024-11-27 00:43:25.292688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:48.531 [2024-11-27 00:43:25.292697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:48.531 [2024-11-27 00:43:25.292705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:48.531 [2024-11-27 00:43:25.292714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:48.531 [2024-11-27 00:43:25.292721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:48.531 [2024-11-27 00:43:25.292728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:48.531 [2024-11-27 00:43:25.292736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:48.531 [2024-11-27 00:43:25.292745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:48.531 [2024-11-27 00:43:25.292752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:48.531 [2024-11-27 00:43:25.292760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:48.531 [2024-11-27 00:43:25.292768] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:48.531 [2024-11-27 00:43:25.292777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.531 [2024-11-27 00:43:25.292786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:48.531 [2024-11-27 00:43:25.292794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:48.531 [2024-11-27 00:43:25.292801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:48.531 [2024-11-27 00:43:25.292811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:48.531 [2024-11-27 00:43:25.292821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.531 [2024-11-27 00:43:25.292829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:48.531 [2024-11-27 00:43:25.292844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:20:48.531 [2024-11-27 00:43:25.292876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.531 [2024-11-27 00:43:25.307439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.531 [2024-11-27 00:43:25.307487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.531 [2024-11-27 00:43:25.307500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.509 ms 00:20:48.531 [2024-11-27 00:43:25.307508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.531 [2024-11-27 00:43:25.307600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.531 [2024-11-27 00:43:25.307615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:48.531 [2024-11-27 00:43:25.307624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 
00:20:48.531 [2024-11-27 00:43:25.307631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.323628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.323677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.792 [2024-11-27 00:43:25.323693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.939 ms 00:20:48.792 [2024-11-27 00:43:25.323704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.323754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.323768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.792 [2024-11-27 00:43:25.323786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.792 [2024-11-27 00:43:25.323800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.324217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.324261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.792 [2024-11-27 00:43:25.324273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:20:48.792 [2024-11-27 00:43:25.324283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.324446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.324460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.792 [2024-11-27 00:43:25.324470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:20:48.792 [2024-11-27 00:43:25.324481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.330432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.330620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.792 [2024-11-27 00:43:25.330640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.927 ms 00:20:48.792 [2024-11-27 00:43:25.330651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.333610] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:48.792 [2024-11-27 00:43:25.333655] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:48.792 [2024-11-27 00:43:25.333671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.333682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:48.792 [2024-11-27 00:43:25.333693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:20:48.792 [2024-11-27 00:43:25.333703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.348918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.349030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:48.792 [2024-11-27 00:43:25.349081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.160 ms 00:20:48.792 [2024-11-27 00:43:25.349104] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.350843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.350885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:48.792 [2024-11-27 00:43:25.350894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:20:48.792 [2024-11-27 00:43:25.350902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.352661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.352773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:48.792 [2024-11-27 00:43:25.352788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:20:48.792 [2024-11-27 00:43:25.352795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.353122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.353136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:48.792 [2024-11-27 00:43:25.353145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:20:48.792 [2024-11-27 00:43:25.353157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.371015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.371158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:48.792 [2024-11-27 00:43:25.371174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.841 ms 00:20:48.792 [2024-11-27 00:43:25.371182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.378624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:48.792 [2024-11-27 00:43:25.381030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.381058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:48.792 [2024-11-27 00:43:25.381074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.809 ms 00:20:48.792 [2024-11-27 00:43:25.381085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.381159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.381171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:48.792 [2024-11-27 00:43:25.381180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:48.792 [2024-11-27 00:43:25.381189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.381251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.381263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:48.792 [2024-11-27 00:43:25.381275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:48.792 [2024-11-27 00:43:25.381284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.381303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.381311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:48.792 [2024-11-27 00:43:25.381318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.792 [2024-11-27 00:43:25.381326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.381354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:48.792 [2024-11-27 00:43:25.381367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.381378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:48.792 [2024-11-27 00:43:25.381385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:48.792 [2024-11-27 00:43:25.381395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.385298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.385404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:48.792 [2024-11-27 00:43:25.385452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.887 ms 00:20:48.792 [2024-11-27 00:43:25.385475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.385786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.792 [2024-11-27 00:43:25.385870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:48.792 [2024-11-27 00:43:25.385898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:48.792 [2024-11-27 00:43:25.385922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.792 [2024-11-27 00:43:25.387280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.262 ms, result 0 00:20:49.735  [2024-11-27T00:43:27.466Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-27T00:43:28.409Z] Copying: 26/1024 [MB] (16 MBps) [2024-11-27T00:43:29.787Z] Copying: 43/1024 [MB] (16 MBps) [2024-11-27T00:43:30.730Z] Copying: 68/1024 [MB] (25 MBps) [2024-11-27T00:43:31.672Z] Copying: 87/1024 [MB] (18 MBps) [2024-11-27T00:43:32.617Z] Copying: 100/1024 [MB] (12 MBps) [2024-11-27T00:43:33.562Z] Copying: 111/1024 [MB] (11 MBps) [2024-11-27T00:43:34.507Z] Copying: 127/1024 [MB] (16 MBps) [2024-11-27T00:43:35.452Z] Copying: 144/1024 [MB] (16 MBps) [2024-11-27T00:43:36.855Z] Copying: 154/1024 [MB] (10 MBps) [2024-11-27T00:43:37.464Z] Copying: 166/1024 [MB] (11 MBps) [2024-11-27T00:43:38.406Z] Copying: 179/1024 [MB] (13 MBps) [2024-11-27T00:43:39.808Z] Copying: 199/1024 [MB] (19 MBps) [2024-11-27T00:43:40.753Z] Copying: 220/1024 [MB] (21 MBps) [2024-11-27T00:43:41.698Z] Copying: 234/1024 [MB] (13 MBps) [2024-11-27T00:43:42.642Z] Copying: 251/1024 [MB] (17 MBps) [2024-11-27T00:43:43.587Z] Copying: 268/1024 [MB] (16 MBps) [2024-11-27T00:43:44.533Z] Copying: 281/1024 [MB] (13 MBps) [2024-11-27T00:43:45.478Z] Copying: 304/1024 [MB] (22 MBps) [2024-11-27T00:43:46.424Z] Copying: 318/1024 [MB] (13 MBps) [2024-11-27T00:43:47.813Z] Copying: 329/1024 [MB] (11 MBps) [2024-11-27T00:43:48.759Z] Copying: 349/1024 [MB] (19 MBps) [2024-11-27T00:43:49.702Z] Copying: 370/1024 [MB] (20 MBps) [2024-11-27T00:43:50.647Z] Copying: 387/1024 [MB] (17 MBps) [2024-11-27T00:43:51.590Z] Copying: 408/1024 [MB] (20 MBps) [2024-11-27T00:43:52.534Z] Copying: 438/1024 [MB] (30 MBps) [2024-11-27T00:43:53.478Z] Copying: 470/1024 [MB] (31 MBps) 
[2024-11-27T00:43:54.422Z] Copying: 490/1024 [MB] (20 MBps) [2024-11-27T00:43:55.808Z] Copying: 514/1024 [MB] (24 MBps) [2024-11-27T00:43:56.753Z] Copying: 534/1024 [MB] (19 MBps) [2024-11-27T00:43:57.699Z] Copying: 547/1024 [MB] (13 MBps) [2024-11-27T00:43:58.643Z] Copying: 571/1024 [MB] (23 MBps) [2024-11-27T00:43:59.585Z] Copying: 586/1024 [MB] (15 MBps) [2024-11-27T00:44:00.529Z] Copying: 600/1024 [MB] (13 MBps) [2024-11-27T00:44:01.473Z] Copying: 615/1024 [MB] (15 MBps) [2024-11-27T00:44:02.415Z] Copying: 636/1024 [MB] (20 MBps) [2024-11-27T00:44:03.803Z] Copying: 663/1024 [MB] (27 MBps) [2024-11-27T00:44:04.748Z] Copying: 679/1024 [MB] (15 MBps) [2024-11-27T00:44:05.693Z] Copying: 690/1024 [MB] (11 MBps) [2024-11-27T00:44:06.636Z] Copying: 708/1024 [MB] (17 MBps) [2024-11-27T00:44:07.581Z] Copying: 725/1024 [MB] (17 MBps) [2024-11-27T00:44:08.524Z] Copying: 745/1024 [MB] (19 MBps) [2024-11-27T00:44:09.565Z] Copying: 777/1024 [MB] (32 MBps) [2024-11-27T00:44:10.515Z] Copying: 815/1024 [MB] (37 MBps) [2024-11-27T00:44:11.459Z] Copying: 838/1024 [MB] (23 MBps) [2024-11-27T00:44:12.405Z] Copying: 867/1024 [MB] (28 MBps) [2024-11-27T00:44:13.790Z] Copying: 915/1024 [MB] (48 MBps) [2024-11-27T00:44:14.734Z] Copying: 934/1024 [MB] (18 MBps) [2024-11-27T00:44:15.678Z] Copying: 970/1024 [MB] (36 MBps) [2024-11-27T00:44:16.622Z] Copying: 993/1024 [MB] (23 MBps) [2024-11-27T00:44:17.565Z] Copying: 1006/1024 [MB] (12 MBps) [2024-11-27T00:44:17.565Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-27 00:44:17.209785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.209844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:40.778 [2024-11-27 00:44:17.209884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:40.778 [2024-11-27 00:44:17.209909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.209931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:40.778 [2024-11-27 00:44:17.210725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.210761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:40.778 [2024-11-27 00:44:17.210782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:21:40.778 [2024-11-27 00:44:17.210792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.213606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.213653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:40.778 [2024-11-27 00:44:17.213665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:21:40.778 [2024-11-27 00:44:17.213674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.233006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.233067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:40.778 [2024-11-27 00:44:17.233079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.303 ms 00:21:40.778 [2024-11-27 00:44:17.233088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.239284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
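The copy phase above finishes at 1024/1024 MB with a 19 MBps average, and the interval samples range from 10 to 48 MBps. For anyone post-processing a saved console log like this one, here is a minimal Python sketch that pulls those per-interval samples back out; it assumes only the `[timestamp] Copying: done/total [MB] (rate MBps)` record format visible above, and the `copy.log` filename is hypothetical:

import re

# Matches interval records like:
#   [2024-11-27T00:43:27.466Z] Copying: 10/1024 [MB] (10 MBps)
# The closing "(average N MBps)" summary record intentionally does not match.
RECORD = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2}T[\d:.]+)Z\]\s+Copying:\s+"
    r"(?P<done>\d+)/(?P<total>\d+) \[MB\] \((?P<rate>\d+) MBps\)"
)

def copy_samples(text):
    """Return (timestamp, MB copied so far, interval MBps) for each record."""
    return [(m["ts"], int(m["done"]), int(m["rate"]))
            for m in RECORD.finditer(text)]

if __name__ == "__main__":
    with open("copy.log") as f:      # hypothetical: the saved console output
        samples = copy_samples(f.read())
    rates = [rate for _, _, rate in samples]
    if rates:
        print(len(samples), "samples, min", min(rates),
              "/ max", max(rates), "MBps")

Run against the copy phase above, this reports the spread of interval rates (10 to 48 MBps here), which makes throughput dips easy to spot without re-reading the flattened progress run.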
00:21:40.778 [2024-11-27 00:44:17.239324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:40.778 [2024-11-27 00:44:17.239336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:21:40.778 [2024-11-27 00:44:17.239344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.242241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.242303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:40.778 [2024-11-27 00:44:17.242314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:21:40.778 [2024-11-27 00:44:17.242322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.246829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.246904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:40.778 [2024-11-27 00:44:17.246916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.463 ms 00:21:40.778 [2024-11-27 00:44:17.246926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.247047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.247058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:40.778 [2024-11-27 00:44:17.247069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:40.778 [2024-11-27 00:44:17.247089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.250342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.250389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:40.778 [2024-11-27 00:44:17.250399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:21:40.778 [2024-11-27 00:44:17.250407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.253232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.253277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:40.778 [2024-11-27 00:44:17.253290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:21:40.778 [2024-11-27 00:44:17.253297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.255626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.255671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:40.778 [2024-11-27 00:44:17.255681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.287 ms 00:21:40.778 [2024-11-27 00:44:17.255688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 00:44:17.258071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.778 [2024-11-27 00:44:17.258291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:40.778 [2024-11-27 00:44:17.258311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:21:40.778 [2024-11-27 00:44:17.258318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.778 [2024-11-27 
00:44:17.258355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:40.778 [2024-11-27 00:44:17.258372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 
00:44:17.258564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:40.778 [2024-11-27 00:44:17.258600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:21:40.779 [2024-11-27 00:44:17.258762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.258995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:40.779 [2024-11-27 00:44:17.259204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:40.779 [2024-11-27 00:44:17.259212] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f54237f-b41a-4fe3-9f71-78ff667aded7 00:21:40.779 [2024-11-27 00:44:17.259220] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:40.779 [2024-11-27 00:44:17.259228] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:40.779 [2024-11-27 00:44:17.259237] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:40.779 [2024-11-27 00:44:17.259245] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:40.779 [2024-11-27 00:44:17.259252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:40.779 [2024-11-27 00:44:17.259259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:40.779 [2024-11-27 00:44:17.259266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:40.779 [2024-11-27 00:44:17.259273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:40.779 [2024-11-27 00:44:17.259291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:40.779 [2024-11-27 00:44:17.259299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.779 [2024-11-27 00:44:17.259313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:40.779 [2024-11-27 00:44:17.259323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:21:40.779 [2024-11-27 00:44:17.259330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.779 [2024-11-27 00:44:17.261626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.779 [2024-11-27 00:44:17.261777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:40.779 [2024-11-27 00:44:17.261796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms 00:21:40.779 [2024-11-27 00:44:17.261805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.779 [2024-11-27 00:44:17.261949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.779 [2024-11-27 00:44:17.261961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:40.779 [2024-11-27 00:44:17.261970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:40.779 [2024-11-27 00:44:17.261978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.779 [2024-11-27 00:44:17.269378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.269427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.780 [2024-11-27 00:44:17.269443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.269450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.269513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.269522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.780 [2024-11-27 00:44:17.269531] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.269538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.269599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.269615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.780 [2024-11-27 00:44:17.269625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.269633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.269649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.269663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.780 [2024-11-27 00:44:17.269671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.269679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.284421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.284635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.780 [2024-11-27 00:44:17.284654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.284662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.295965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.296009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.780 [2024-11-27 00:44:17.296021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.296030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.296108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.296119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.780 [2024-11-27 00:44:17.296134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.296142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.296178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.296188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.780 [2024-11-27 00:44:17.296202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.296211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.296285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.296299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.780 [2024-11-27 00:44:17.296308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.296317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.296345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.296355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:21:40.780 [2024-11-27 00:44:17.296363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.296374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.296413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.296422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.780 [2024-11-27 00:44:17.296430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.296438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.296481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.780 [2024-11-27 00:44:17.296493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.780 [2024-11-27 00:44:17.296505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.780 [2024-11-27 00:44:17.296513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.780 [2024-11-27 00:44:17.296647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.822 ms, result 0 00:21:41.350 00:21:41.350 00:21:41.350 00:44:17 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:41.350 [2024-11-27 00:44:17.966162] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:21:41.350 [2024-11-27 00:44:17.966395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89308 ] 00:21:41.350 [2024-11-27 00:44:18.129216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.610 [2024-11-27 00:44:18.157983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.610 [2024-11-27 00:44:18.275614] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.610 [2024-11-27 00:44:18.276020] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.873 [2024-11-27 00:44:18.436227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.436284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:41.873 [2024-11-27 00:44:18.436299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:41.873 [2024-11-27 00:44:18.436313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.436369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.436381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.873 [2024-11-27 00:44:18.436394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:41.873 [2024-11-27 00:44:18.436402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.436429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:41.873 
[2024-11-27 00:44:18.436706] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:41.873 [2024-11-27 00:44:18.436731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.436740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.873 [2024-11-27 00:44:18.436751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:21:41.873 [2024-11-27 00:44:18.436759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.438465] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:41.873 [2024-11-27 00:44:18.442283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.442482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:41.873 [2024-11-27 00:44:18.442509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.821 ms 00:21:41.873 [2024-11-27 00:44:18.442521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.442673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.442709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:41.873 [2024-11-27 00:44:18.442719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:41.873 [2024-11-27 00:44:18.442731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.450931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.450971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.873 [2024-11-27 00:44:18.450990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.153 ms 00:21:41.873 [2024-11-27 00:44:18.451002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.451099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.451109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.873 [2024-11-27 00:44:18.451118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:41.873 [2024-11-27 00:44:18.451129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.451191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.451209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:41.873 [2024-11-27 00:44:18.451219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:41.873 [2024-11-27 00:44:18.451230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.451253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:41.873 [2024-11-27 00:44:18.453291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.453486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.873 [2024-11-27 00:44:18.453504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.043 ms 00:21:41.873 [2024-11-27 00:44:18.453512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:41.873 [2024-11-27 00:44:18.453559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.453568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:41.873 [2024-11-27 00:44:18.453577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:41.873 [2024-11-27 00:44:18.453588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.873 [2024-11-27 00:44:18.453615] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:41.873 [2024-11-27 00:44:18.453642] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:41.873 [2024-11-27 00:44:18.453679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:41.873 [2024-11-27 00:44:18.453701] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:41.873 [2024-11-27 00:44:18.453807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:41.873 [2024-11-27 00:44:18.453820] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:41.873 [2024-11-27 00:44:18.453835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:41.873 [2024-11-27 00:44:18.453846] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:41.873 [2024-11-27 00:44:18.453881] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:41.873 [2024-11-27 00:44:18.453891] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:41.873 [2024-11-27 00:44:18.453899] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:41.873 [2024-11-27 00:44:18.453908] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:41.873 [2024-11-27 00:44:18.453919] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:41.873 [2024-11-27 00:44:18.453928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.873 [2024-11-27 00:44:18.453936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:41.873 [2024-11-27 00:44:18.453944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:21:41.874 [2024-11-27 00:44:18.453955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.874 [2024-11-27 00:44:18.454046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.874 [2024-11-27 00:44:18.454058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:41.874 [2024-11-27 00:44:18.454071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:41.874 [2024-11-27 00:44:18.454079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.874 [2024-11-27 00:44:18.454183] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:41.874 [2024-11-27 00:44:18.454197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:41.874 [2024-11-27 00:44:18.454208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454224] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:41.874 [2024-11-27 00:44:18.454241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:41.874 [2024-11-27 00:44:18.454284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.874 [2024-11-27 00:44:18.454304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:41.874 [2024-11-27 00:44:18.454312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:41.874 [2024-11-27 00:44:18.454321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.874 [2024-11-27 00:44:18.454329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:41.874 [2024-11-27 00:44:18.454339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:41.874 [2024-11-27 00:44:18.454348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:41.874 [2024-11-27 00:44:18.454364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:41.874 [2024-11-27 00:44:18.454389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:41.874 [2024-11-27 00:44:18.454414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:41.874 [2024-11-27 00:44:18.454445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:41.874 [2024-11-27 00:44:18.454468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:41.874 [2024-11-27 00:44:18.454488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.874 [2024-11-27 00:44:18.454503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:41.874 
[2024-11-27 00:44:18.454509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:41.874 [2024-11-27 00:44:18.454516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.874 [2024-11-27 00:44:18.454523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:41.874 [2024-11-27 00:44:18.454530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:41.874 [2024-11-27 00:44:18.454536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:41.874 [2024-11-27 00:44:18.454552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:41.874 [2024-11-27 00:44:18.454560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454567] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:41.874 [2024-11-27 00:44:18.454578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:41.874 [2024-11-27 00:44:18.454587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.874 [2024-11-27 00:44:18.454603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:41.874 [2024-11-27 00:44:18.454611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:41.874 [2024-11-27 00:44:18.454618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:41.874 [2024-11-27 00:44:18.454625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:41.874 [2024-11-27 00:44:18.454633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:41.874 [2024-11-27 00:44:18.454640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:41.874 [2024-11-27 00:44:18.454649] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:41.874 [2024-11-27 00:44:18.454658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.874 [2024-11-27 00:44:18.454670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:41.874 [2024-11-27 00:44:18.454679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:41.874 [2024-11-27 00:44:18.454686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:41.874 [2024-11-27 00:44:18.454695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:41.874 [2024-11-27 00:44:18.454702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:41.874 [2024-11-27 00:44:18.454710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:41.874 [2024-11-27 00:44:18.454717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:21:41.874 [2024-11-27 00:44:18.454726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:41.874 [2024-11-27 00:44:18.454733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:41.874 [2024-11-27 00:44:18.454739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:41.874 [2024-11-27 00:44:18.454746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:41.874 [2024-11-27 00:44:18.454753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:41.874 [2024-11-27 00:44:18.454761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:41.874 [2024-11-27 00:44:18.454768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:41.874 [2024-11-27 00:44:18.454775] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:41.874 [2024-11-27 00:44:18.454783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.874 [2024-11-27 00:44:18.454792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:41.874 [2024-11-27 00:44:18.454800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:41.874 [2024-11-27 00:44:18.454809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:41.874 [2024-11-27 00:44:18.454819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:41.874 [2024-11-27 00:44:18.454826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.874 [2024-11-27 00:44:18.454834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:41.874 [2024-11-27 00:44:18.454848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:21:41.874 [2024-11-27 00:44:18.454874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.874 [2024-11-27 00:44:18.469049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.874 [2024-11-27 00:44:18.469093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.874 [2024-11-27 00:44:18.469105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.124 ms 00:21:41.874 [2024-11-27 00:44:18.469114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.874 [2024-11-27 00:44:18.469205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.874 [2024-11-27 00:44:18.469214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:41.874 [2024-11-27 00:44:18.469223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:41.874 [2024-11-27 
00:44:18.469231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.874 [2024-11-27 00:44:18.492012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.874 [2024-11-27 00:44:18.492250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.874 [2024-11-27 00:44:18.492275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.720 ms 00:21:41.874 [2024-11-27 00:44:18.492287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.874 [2024-11-27 00:44:18.492344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.874 [2024-11-27 00:44:18.492358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.874 [2024-11-27 00:44:18.492369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:41.874 [2024-11-27 00:44:18.492388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.493020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.493066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.875 [2024-11-27 00:44:18.493081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:21:41.875 [2024-11-27 00:44:18.493103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.493282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.493297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.875 [2024-11-27 00:44:18.493307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:41.875 [2024-11-27 00:44:18.493318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.501276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.501333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.875 [2024-11-27 00:44:18.501344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.932 ms 00:21:41.875 [2024-11-27 00:44:18.501352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.505491] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:41.875 [2024-11-27 00:44:18.505542] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:41.875 [2024-11-27 00:44:18.505559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.505569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:41.875 [2024-11-27 00:44:18.505578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:21:41.875 [2024-11-27 00:44:18.505586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.521399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.521446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:41.875 [2024-11-27 00:44:18.521459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.751 ms 00:21:41.875 [2024-11-27 00:44:18.521468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:21:41.875 [2024-11-27 00:44:18.524436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.524484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:41.875 [2024-11-27 00:44:18.524496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:21:41.875 [2024-11-27 00:44:18.524504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.527162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.527348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:41.875 [2024-11-27 00:44:18.527366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:21:41.875 [2024-11-27 00:44:18.527376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.527726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.527747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:41.875 [2024-11-27 00:44:18.527761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:41.875 [2024-11-27 00:44:18.527771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.551100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.551154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:41.875 [2024-11-27 00:44:18.551167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.299 ms 00:21:41.875 [2024-11-27 00:44:18.551176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.559187] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:41.875 [2024-11-27 00:44:18.562246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.562454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:41.875 [2024-11-27 00:44:18.562474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.012 ms 00:21:41.875 [2024-11-27 00:44:18.562491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.562568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.562579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:41.875 [2024-11-27 00:44:18.562590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:41.875 [2024-11-27 00:44:18.562598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.562673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.562690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:41.875 [2024-11-27 00:44:18.562699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:41.875 [2024-11-27 00:44:18.562708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.562737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.562747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:41.875 
[2024-11-27 00:44:18.562755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:41.875 [2024-11-27 00:44:18.562763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.562806] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:41.875 [2024-11-27 00:44:18.562820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.562828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:41.875 [2024-11-27 00:44:18.562839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:41.875 [2024-11-27 00:44:18.562847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.568178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.568225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:41.875 [2024-11-27 00:44:18.568236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.286 ms 00:21:41.875 [2024-11-27 00:44:18.568245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.568334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.875 [2024-11-27 00:44:18.568344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:41.875 [2024-11-27 00:44:18.568357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:41.875 [2024-11-27 00:44:18.568369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.875 [2024-11-27 00:44:18.569554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.862 ms, result 0 00:21:43.261  [2024-11-27T00:45:22.873Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-11-27 00:45:22.774561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.774632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:46.086 [2024-11-27 00:45:22.774654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:46.086 [2024-11-27 00:45:22.774663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.774686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:46.086 [2024-11-27 00:45:22.775592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.775668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:46.086 [2024-11-27 00:45:22.775691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:22:46.086 [2024-11-27 00:45:22.775710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.776208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.776323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:46.086 [2024-11-27 00:45:22.776348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.455 ms 00:22:46.086 [2024-11-27 00:45:22.776381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.785740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.785777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:46.086 [2024-11-27 00:45:22.786766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.327 ms 00:22:46.086 [2024-11-27 00:45:22.786778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.792942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.792974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:46.086 [2024-11-27 00:45:22.792984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.137 ms 00:22:46.086 [2024-11-27 00:45:22.793005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.795942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.796150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:46.086 [2024-11-27 00:45:22.796170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms 00:22:46.086 [2024-11-27 00:45:22.796178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.802174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.802218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:46.086 [2024-11-27 00:45:22.802230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.881 ms 00:22:46.086 [2024-11-27 00:45:22.802240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.802380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.802391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:46.086 [2024-11-27 00:45:22.802415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:46.086 [2024-11-27 00:45:22.802428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.806167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.806362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:46.086 [2024-11-27 00:45:22.806382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:22:46.086 [2024-11-27 00:45:22.806389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.809496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.809535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:46.086 [2024-11-27 00:45:22.809545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.063 ms 00:22:46.086 [2024-11-27 00:45:22.809552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.812110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.812150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:46.086 [2024-11-27 00:45:22.812160] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.511 ms 00:22:46.086 [2024-11-27 00:45:22.812168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.086 [2024-11-27 00:45:22.814759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.086 [2024-11-27 00:45:22.814947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:46.086 [2024-11-27 00:45:22.815124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.510 ms 00:22:46.087 [2024-11-27 00:45:22.815166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.087 [2024-11-27 00:45:22.815222] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:46.087 [2024-11-27 00:45:22.815250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.815998] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 
00:45:22.816181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:46.087 [2024-11-27 00:45:22.816316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:22:46.088 [2024-11-27 00:45:22.816369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:46.088 [2024-11-27 00:45:22.816423] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:46.088 [2024-11-27 00:45:22.816431] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f54237f-b41a-4fe3-9f71-78ff667aded7 00:22:46.088 [2024-11-27 00:45:22.816440] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:46.088 [2024-11-27 00:45:22.816447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:46.088 [2024-11-27 00:45:22.816454] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:46.088 [2024-11-27 00:45:22.816462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:46.088 [2024-11-27 00:45:22.816469] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:46.088 [2024-11-27 00:45:22.816477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:46.088 [2024-11-27 00:45:22.816494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:46.088 [2024-11-27 00:45:22.816507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:46.088 [2024-11-27 00:45:22.816516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:46.088 [2024-11-27 00:45:22.816525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.088 [2024-11-27 00:45:22.816533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:46.088 [2024-11-27 00:45:22.816542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:22:46.088 [2024-11-27 00:45:22.816549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.819197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.088 [2024-11-27 00:45:22.819349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:46.088 [2024-11-27 00:45:22.819401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:22:46.088 [2024-11-27 00:45:22.819425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.819573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.088 [2024-11-27 00:45:22.819597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:46.088 [2024-11-27 00:45:22.819619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:46.088 [2024-11-27 00:45:22.819638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.827712] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.827912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:46.088 [2024-11-27 00:45:22.827971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.828003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.828079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.828103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:46.088 [2024-11-27 00:45:22.828154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.828177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.828241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.828265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:46.088 [2024-11-27 00:45:22.828314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.828344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.828382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.828402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:46.088 [2024-11-27 00:45:22.828422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.828468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.842873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.843033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:46.088 [2024-11-27 00:45:22.843086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.843116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.853619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.853780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:46.088 [2024-11-27 00:45:22.853796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.853806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.853914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.853926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:46.088 [2024-11-27 00:45:22.853935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.853943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.853978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.853993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:46.088 [2024-11-27 00:45:22.854002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.854015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:46.088 [2024-11-27 00:45:22.854091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.854101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:46.088 [2024-11-27 00:45:22.854109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.854121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.854150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.854160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:46.088 [2024-11-27 00:45:22.854171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.854179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.854218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.854227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:46.088 [2024-11-27 00:45:22.854235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.854243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.854296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:46.088 [2024-11-27 00:45:22.854310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:46.088 [2024-11-27 00:45:22.854319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:46.088 [2024-11-27 00:45:22.854327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.088 [2024-11-27 00:45:22.854455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.864 ms, result 0 00:22:46.349 00:22:46.349 00:22:46.349 00:45:23 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:48.894 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:48.894 00:45:25 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:48.894 [2024-11-27 00:45:25.314947] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:22:48.894 [2024-11-27 00:45:25.315202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90001 ] 00:22:48.894 [2024-11-27 00:45:25.471887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.894 [2024-11-27 00:45:25.493414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.894 [2024-11-27 00:45:25.583661] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.894 [2024-11-27 00:45:25.583733] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:49.158 [2024-11-27 00:45:25.741135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.741182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:49.158 [2024-11-27 00:45:25.741196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.158 [2024-11-27 00:45:25.741204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.741254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.741264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:49.158 [2024-11-27 00:45:25.741273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:49.158 [2024-11-27 00:45:25.741281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.741300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:49.158 [2024-11-27 00:45:25.741550] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:49.158 [2024-11-27 00:45:25.741568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.741576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:49.158 [2024-11-27 00:45:25.741587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:22:49.158 [2024-11-27 00:45:25.741595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.742985] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:49.158 [2024-11-27 00:45:25.745992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.746029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:49.158 [2024-11-27 00:45:25.746046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:22:49.158 [2024-11-27 00:45:25.746061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.746115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.746127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:49.158 [2024-11-27 00:45:25.746136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:49.158 [2024-11-27 00:45:25.746147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.751797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:49.158 [2024-11-27 00:45:25.751830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.158 [2024-11-27 00:45:25.751846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.599 ms 00:22:49.158 [2024-11-27 00:45:25.751872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.751962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.751973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.158 [2024-11-27 00:45:25.751983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:49.158 [2024-11-27 00:45:25.751992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.752029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.752038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:49.158 [2024-11-27 00:45:25.752046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:49.158 [2024-11-27 00:45:25.752056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.752078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:49.158 [2024-11-27 00:45:25.753613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.753753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.158 [2024-11-27 00:45:25.753769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:22:49.158 [2024-11-27 00:45:25.753776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.753809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.753817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:49.158 [2024-11-27 00:45:25.753826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:49.158 [2024-11-27 00:45:25.753842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.753884] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:49.158 [2024-11-27 00:45:25.753907] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:49.158 [2024-11-27 00:45:25.753942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:49.158 [2024-11-27 00:45:25.753961] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:49.158 [2024-11-27 00:45:25.754069] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:49.158 [2024-11-27 00:45:25.754081] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:49.158 [2024-11-27 00:45:25.754094] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:49.158 [2024-11-27 00:45:25.754104] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:49.158 [2024-11-27 00:45:25.754114] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:49.158 [2024-11-27 00:45:25.754122] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:49.158 [2024-11-27 00:45:25.754130] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:49.158 [2024-11-27 00:45:25.754141] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:49.158 [2024-11-27 00:45:25.754152] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:49.158 [2024-11-27 00:45:25.754160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.754168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:49.158 [2024-11-27 00:45:25.754176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:22:49.158 [2024-11-27 00:45:25.754183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.754288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.158 [2024-11-27 00:45:25.754302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:49.158 [2024-11-27 00:45:25.754311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:49.158 [2024-11-27 00:45:25.754320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.158 [2024-11-27 00:45:25.754427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:49.158 [2024-11-27 00:45:25.754439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:49.158 [2024-11-27 00:45:25.754449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.158 [2024-11-27 00:45:25.754463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.158 [2024-11-27 00:45:25.754473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:49.158 [2024-11-27 00:45:25.754482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:49.158 [2024-11-27 00:45:25.754490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:49.158 [2024-11-27 00:45:25.754499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:49.158 [2024-11-27 00:45:25.754508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:49.158 [2024-11-27 00:45:25.754516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.158 [2024-11-27 00:45:25.754524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:49.158 [2024-11-27 00:45:25.754536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:49.158 [2024-11-27 00:45:25.754543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:49.158 [2024-11-27 00:45:25.754551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:49.158 [2024-11-27 00:45:25.754560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:49.158 [2024-11-27 00:45:25.754568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.158 [2024-11-27 00:45:25.754576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:49.159 [2024-11-27 00:45:25.754584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:49.159 [2024-11-27 00:45:25.754592] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:49.159 [2024-11-27 00:45:25.754610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.159 [2024-11-27 00:45:25.754625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:49.159 [2024-11-27 00:45:25.754632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.159 [2024-11-27 00:45:25.754648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:49.159 [2024-11-27 00:45:25.754655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.159 [2024-11-27 00:45:25.754675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:49.159 [2024-11-27 00:45:25.754683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:49.159 [2024-11-27 00:45:25.754696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:49.159 [2024-11-27 00:45:25.754702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.159 [2024-11-27 00:45:25.754716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:49.159 [2024-11-27 00:45:25.754722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:49.159 [2024-11-27 00:45:25.754729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:49.159 [2024-11-27 00:45:25.754735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:49.159 [2024-11-27 00:45:25.754741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:49.159 [2024-11-27 00:45:25.754748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:49.159 [2024-11-27 00:45:25.754761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:49.159 [2024-11-27 00:45:25.754767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754776] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:49.159 [2024-11-27 00:45:25.754786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:49.159 [2024-11-27 00:45:25.754797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:49.159 [2024-11-27 00:45:25.754804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:49.159 [2024-11-27 00:45:25.754812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:49.159 [2024-11-27 00:45:25.754818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:49.159 [2024-11-27 00:45:25.754825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:49.159 
[2024-11-27 00:45:25.754832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:49.159 [2024-11-27 00:45:25.754840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:49.159 [2024-11-27 00:45:25.754846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:49.159 [2024-11-27 00:45:25.754871] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:49.159 [2024-11-27 00:45:25.754882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.159 [2024-11-27 00:45:25.754890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:49.159 [2024-11-27 00:45:25.754898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:49.159 [2024-11-27 00:45:25.754905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:49.159 [2024-11-27 00:45:25.754912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:49.159 [2024-11-27 00:45:25.754923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:49.159 [2024-11-27 00:45:25.754930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:49.159 [2024-11-27 00:45:25.754938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:49.159 [2024-11-27 00:45:25.754945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:49.159 [2024-11-27 00:45:25.754952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:49.159 [2024-11-27 00:45:25.754960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:49.159 [2024-11-27 00:45:25.754967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:49.159 [2024-11-27 00:45:25.754975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:49.159 [2024-11-27 00:45:25.754982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:49.159 [2024-11-27 00:45:25.754989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:49.159 [2024-11-27 00:45:25.754997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:49.159 [2024-11-27 00:45:25.755005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:49.159 [2024-11-27 00:45:25.755013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:49.159 [2024-11-27 00:45:25.755020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:49.159 [2024-11-27 00:45:25.755027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:49.159 [2024-11-27 00:45:25.755035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:49.159 [2024-11-27 00:45:25.755045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.755055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:49.159 [2024-11-27 00:45:25.755063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:22:49.159 [2024-11-27 00:45:25.755073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.765559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.765698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:49.159 [2024-11-27 00:45:25.765753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.437 ms 00:22:49.159 [2024-11-27 00:45:25.765782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.765893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.765917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:49.159 [2024-11-27 00:45:25.765939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:49.159 [2024-11-27 00:45:25.765958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.785380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.785584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:49.159 [2024-11-27 00:45:25.785687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.357 ms 00:22:49.159 [2024-11-27 00:45:25.785721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.785794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.785829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:49.159 [2024-11-27 00:45:25.785877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:49.159 [2024-11-27 00:45:25.785907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.786521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.786667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:49.159 [2024-11-27 00:45:25.786734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:22:49.159 [2024-11-27 00:45:25.786766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.786993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.787182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:49.159 [2024-11-27 00:45:25.787218] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:22:49.159 [2024-11-27 00:45:25.787246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.794042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.794175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.159 [2024-11-27 00:45:25.794228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.748 ms 00:22:49.159 [2024-11-27 00:45:25.794251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.797689] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:49.159 [2024-11-27 00:45:25.797838] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:49.159 [2024-11-27 00:45:25.797923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.797944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:49.159 [2024-11-27 00:45:25.797964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:22:49.159 [2024-11-27 00:45:25.797983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.813549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.159 [2024-11-27 00:45:25.813709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:49.159 [2024-11-27 00:45:25.813767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.510 ms 00:22:49.159 [2024-11-27 00:45:25.813790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.159 [2024-11-27 00:45:25.816398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.816547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:49.160 [2024-11-27 00:45:25.816600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:22:49.160 [2024-11-27 00:45:25.816622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.819260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.819425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:49.160 [2024-11-27 00:45:25.819486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:22:49.160 [2024-11-27 00:45:25.819508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.820035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.820102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:49.160 [2024-11-27 00:45:25.820180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:22:49.160 [2024-11-27 00:45:25.820204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.845530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.845716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:49.160 [2024-11-27 00:45:25.845775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.281 ms 00:22:49.160 [2024-11-27 00:45:25.845797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.854317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:49.160 [2024-11-27 00:45:25.857619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.857759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:49.160 [2024-11-27 00:45:25.857814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.770 ms 00:22:49.160 [2024-11-27 00:45:25.857844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.857963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.857998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:49.160 [2024-11-27 00:45:25.858022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:49.160 [2024-11-27 00:45:25.858047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.858194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.858231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:49.160 [2024-11-27 00:45:25.858253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:49.160 [2024-11-27 00:45:25.858298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.858378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.858406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:49.160 [2024-11-27 00:45:25.858427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:49.160 [2024-11-27 00:45:25.858447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.858500] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:49.160 [2024-11-27 00:45:25.858562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.858574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:49.160 [2024-11-27 00:45:25.858588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:49.160 [2024-11-27 00:45:25.858596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.864111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.864281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:49.160 [2024-11-27 00:45:25.864311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.491 ms 00:22:49.160 [2024-11-27 00:45:25.864320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 [2024-11-27 00:45:25.864495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.160 [2024-11-27 00:45:25.864521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:49.160 [2024-11-27 00:45:25.864532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:49.160 [2024-11-27 00:45:25.864546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.160 
[2024-11-27 00:45:25.865772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.092 ms, result 0 00:22:50.100 [spdk_dd progress meter, 00:45:28Z to 00:46:20Z, elided: Copying: 16/1024 [MB] (16 MBps) through Copying: 1024/1024 [MB] (average 18 MBps), per-update rates 11-32 MBps][2024-11-27 00:46:20.231246] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:23:43.453 [2024-11-27 00:46:20.231338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:43.453 [2024-11-27 00:46:20.231357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:43.453 [2024-11-27 00:46:20.231368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.453 [2024-11-27 00:46:20.234586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:43.453 [2024-11-27 00:46:20.236165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.453 [2024-11-27 00:46:20.236224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:43.453 [2024-11-27 00:46:20.236241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.516 ms 00:23:43.453 [2024-11-27 00:46:20.236251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.714 [2024-11-27 00:46:20.249077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.714 [2024-11-27 00:46:20.249285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:43.714 [2024-11-27 00:46:20.249310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.122 ms 00:23:43.714 [2024-11-27 00:46:20.249322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.714 [2024-11-27 00:46:20.273639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.714 [2024-11-27 00:46:20.273710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:43.714 [2024-11-27 00:46:20.273723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:23:43.714 [2024-11-27 00:46:20.273740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.714 [2024-11-27 00:46:20.279998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.714 [2024-11-27 00:46:20.280193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:43.714 [2024-11-27 00:46:20.280227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.215 ms 00:23:43.714 [2024-11-27 00:46:20.280236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.714 [2024-11-27 00:46:20.283377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.714 [2024-11-27 00:46:20.283561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:43.714 [2024-11-27 00:46:20.283582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.075 ms 00:23:43.714 [2024-11-27 00:46:20.283590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.714 [2024-11-27 00:46:20.288633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.714 [2024-11-27 00:46:20.288689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:43.714 [2024-11-27 00:46:20.288700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.001 ms 00:23:43.714 [2024-11-27 00:46:20.288718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.978 [2024-11-27 00:46:20.511507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.978 [2024-11-27 00:46:20.511578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:43.978 [2024-11-27 00:46:20.511609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 222.737 ms 00:23:43.978 [2024-11-27 00:46:20.511619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.978 [2024-11-27 00:46:20.514777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.978 [2024-11-27 00:46:20.515003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:43.978 [2024-11-27 00:46:20.515024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:23:43.978 [2024-11-27 00:46:20.515032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.978 [2024-11-27 00:46:20.517823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.978 [2024-11-27 00:46:20.517888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:43.978 [2024-11-27 00:46:20.517900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:23:43.978 [2024-11-27 00:46:20.517907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.978 [2024-11-27 00:46:20.520232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.978 [2024-11-27 00:46:20.520288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:43.978 [2024-11-27 00:46:20.520299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.279 ms 00:23:43.978 [2024-11-27 00:46:20.520307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.978 [2024-11-27 00:46:20.522757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.978 [2024-11-27 00:46:20.522811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:43.978 [2024-11-27 00:46:20.522822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:23:43.978 [2024-11-27 00:46:20.522829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.978 [2024-11-27 00:46:20.522894] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:43.978 [2024-11-27 00:46:20.522911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106752 / 261120 wr_cnt: 1 state: open 00:23:43.978 [2024-11-27 00:46:20.522922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.522994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523002] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 
00:46:20.523219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:23:43.978 [2024-11-27 00:46:20.523424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:43.978 [2024-11-27 00:46:20.523456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:43.979 [2024-11-27 00:46:20.523767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:43.979 [2024-11-27 00:46:20.523775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f54237f-b41a-4fe3-9f71-78ff667aded7 00:23:43.979 [2024-11-27 00:46:20.523784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106752 00:23:43.979 [2024-11-27 00:46:20.523798] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107712 00:23:43.979 [2024-11-27 00:46:20.523807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106752 00:23:43.979 [2024-11-27 00:46:20.523816] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:23:43.979 [2024-11-27 00:46:20.523823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:43.979 [2024-11-27 00:46:20.523831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:43.979 [2024-11-27 00:46:20.523839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:43.979 [2024-11-27 00:46:20.523866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:43.979 [2024-11-27 00:46:20.523874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:43.979 [2024-11-27 00:46:20.523882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:43.979 [2024-11-27 00:46:20.523891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:43.979 [2024-11-27 00:46:20.523900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:23:43.979 [2024-11-27 00:46:20.523908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.526409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.979 [2024-11-27 00:46:20.526444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:43.979 [2024-11-27 00:46:20.526457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:23:43.979 [2024-11-27 00:46:20.526464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.526601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.979 [2024-11-27 00:46:20.526612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:43.979 [2024-11-27 00:46:20.526621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:23:43.979 [2024-11-27 00:46:20.526633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.534468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.534521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:43.979 [2024-11-27 00:46:20.534532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.534542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.534599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.534614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:43.979 [2024-11-27 00:46:20.534623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.534643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.534706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.534720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:43.979 [2024-11-27 00:46:20.534729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.534737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.534754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.534763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:43.979 [2024-11-27 00:46:20.534771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.534779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.548955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.549006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.979 [2024-11-27 00:46:20.549018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.549036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 
00:46:20.559323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.559378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.979 [2024-11-27 00:46:20.559390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.559399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.559455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.559465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.979 [2024-11-27 00:46:20.559474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.559482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.559536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.559547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.979 [2024-11-27 00:46:20.559561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.559570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.559656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.559667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.979 [2024-11-27 00:46:20.559676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.559684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.559718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.559728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:43.979 [2024-11-27 00:46:20.559736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.559744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.559783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.559796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:43.979 [2024-11-27 00:46:20.559804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.979 [2024-11-27 00:46:20.559814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.979 [2024-11-27 00:46:20.559888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.979 [2024-11-27 00:46:20.559901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.980 [2024-11-27 00:46:20.559909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.980 [2024-11-27 00:46:20.559918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.980 [2024-11-27 00:46:20.560066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.797 ms, result 0 00:23:44.555 00:23:44.555 00:23:44.817 00:46:21 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:44.817 [2024-11-27 00:46:21.419763] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:23:44.817 [2024-11-27 00:46:21.420618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90580 ] 00:23:44.817 [2024-11-27 00:46:21.591656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.078 [2024-11-27 00:46:21.617285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.078 [2024-11-27 00:46:21.726986] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:45.078 [2024-11-27 00:46:21.727301] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:45.341 [2024-11-27 00:46:21.888292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.341 [2024-11-27 00:46:21.888353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:45.341 [2024-11-27 00:46:21.888376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:45.341 [2024-11-27 00:46:21.888390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.888449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.888460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:45.342 [2024-11-27 00:46:21.888470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:45.342 [2024-11-27 00:46:21.888478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.888503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:45.342 [2024-11-27 00:46:21.888789] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:45.342 [2024-11-27 00:46:21.888812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.888822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:45.342 [2024-11-27 00:46:21.888838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:23:45.342 [2024-11-27 00:46:21.888846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.890661] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:45.342 [2024-11-27 00:46:21.894791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.894849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:45.342 [2024-11-27 00:46:21.894893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.132 ms 00:23:45.342 [2024-11-27 00:46:21.894910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.894991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.895005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:45.342 [2024-11-27 00:46:21.895014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 
00:23:45.342 [2024-11-27 00:46:21.895022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.903495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.903542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:45.342 [2024-11-27 00:46:21.903558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.427 ms 00:23:45.342 [2024-11-27 00:46:21.903567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.903681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.903692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:45.342 [2024-11-27 00:46:21.903702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:45.342 [2024-11-27 00:46:21.903714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.903774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.903784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:45.342 [2024-11-27 00:46:21.903796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:45.342 [2024-11-27 00:46:21.903807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.903830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:45.342 [2024-11-27 00:46:21.906023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.906171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:45.342 [2024-11-27 00:46:21.906232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:23:45.342 [2024-11-27 00:46:21.906256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.906327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.906351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:45.342 [2024-11-27 00:46:21.906374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:45.342 [2024-11-27 00:46:21.906400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.906435] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:45.342 [2024-11-27 00:46:21.906470] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:45.342 [2024-11-27 00:46:21.906607] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:45.342 [2024-11-27 00:46:21.906649] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:45.342 [2024-11-27 00:46:21.906787] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:45.342 [2024-11-27 00:46:21.906827] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:45.342 [2024-11-27 00:46:21.906881] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:23:45.342 [2024-11-27 00:46:21.906955] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:45.342 [2024-11-27 00:46:21.906991] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:45.342 [2024-11-27 00:46:21.907020] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:45.342 [2024-11-27 00:46:21.907047] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:45.342 [2024-11-27 00:46:21.907068] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:45.342 [2024-11-27 00:46:21.907087] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:45.342 [2024-11-27 00:46:21.907613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.907672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:45.342 [2024-11-27 00:46:21.907713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:23:45.342 [2024-11-27 00:46:21.907734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.907883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.342 [2024-11-27 00:46:21.907912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:45.342 [2024-11-27 00:46:21.907947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:23:45.342 [2024-11-27 00:46:21.907969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.342 [2024-11-27 00:46:21.908170] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:45.342 [2024-11-27 00:46:21.908373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:45.342 [2024-11-27 00:46:21.908397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.342 [2024-11-27 00:46:21.908419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.342 [2024-11-27 00:46:21.908428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:45.342 [2024-11-27 00:46:21.908435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:45.342 [2024-11-27 00:46:21.908443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:45.342 [2024-11-27 00:46:21.908450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:45.342 [2024-11-27 00:46:21.908457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:45.342 [2024-11-27 00:46:21.908464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.342 [2024-11-27 00:46:21.908470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:45.342 [2024-11-27 00:46:21.908477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:45.342 [2024-11-27 00:46:21.908484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.342 [2024-11-27 00:46:21.908491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:45.342 [2024-11-27 00:46:21.908502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:45.342 [2024-11-27 00:46:21.908510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.342 [2024-11-27 00:46:21.908518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:23:45.342 [2024-11-27 00:46:21.908525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:45.342 [2024-11-27 00:46:21.908531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.342 [2024-11-27 00:46:21.908538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:45.342 [2024-11-27 00:46:21.908545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:45.342 [2024-11-27 00:46:21.908552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.342 [2024-11-27 00:46:21.908558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:45.343 [2024-11-27 00:46:21.908565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:45.343 [2024-11-27 00:46:21.908571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.343 [2024-11-27 00:46:21.908578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:45.343 [2024-11-27 00:46:21.908584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:45.343 [2024-11-27 00:46:21.908591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.343 [2024-11-27 00:46:21.908597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:45.343 [2024-11-27 00:46:21.908604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:45.343 [2024-11-27 00:46:21.908612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.343 [2024-11-27 00:46:21.908619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:45.343 [2024-11-27 00:46:21.908626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:45.343 [2024-11-27 00:46:21.908634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.343 [2024-11-27 00:46:21.908641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:45.343 [2024-11-27 00:46:21.908648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:45.343 [2024-11-27 00:46:21.908655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.343 [2024-11-27 00:46:21.908662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:45.343 [2024-11-27 00:46:21.908670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:45.343 [2024-11-27 00:46:21.908678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.343 [2024-11-27 00:46:21.908684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:45.343 [2024-11-27 00:46:21.908691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:45.343 [2024-11-27 00:46:21.908699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.343 [2024-11-27 00:46:21.908705] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:45.343 [2024-11-27 00:46:21.908717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:45.343 [2024-11-27 00:46:21.908725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.343 [2024-11-27 00:46:21.908739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.343 [2024-11-27 00:46:21.908747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:45.343 [2024-11-27 00:46:21.908754] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:45.343 [2024-11-27 00:46:21.908761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:45.343 [2024-11-27 00:46:21.908768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:45.343 [2024-11-27 00:46:21.908774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:45.343 [2024-11-27 00:46:21.908780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:45.343 [2024-11-27 00:46:21.908791] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:45.343 [2024-11-27 00:46:21.908804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.343 [2024-11-27 00:46:21.908813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:45.343 [2024-11-27 00:46:21.908820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:45.343 [2024-11-27 00:46:21.908827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:45.343 [2024-11-27 00:46:21.908834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:45.343 [2024-11-27 00:46:21.908841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:45.343 [2024-11-27 00:46:21.908848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:45.343 [2024-11-27 00:46:21.908877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:45.343 [2024-11-27 00:46:21.908890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:45.343 [2024-11-27 00:46:21.908900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:45.343 [2024-11-27 00:46:21.908908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:45.343 [2024-11-27 00:46:21.908916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:45.343 [2024-11-27 00:46:21.908924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:45.343 [2024-11-27 00:46:21.908935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:45.343 [2024-11-27 00:46:21.908947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:45.343 [2024-11-27 00:46:21.908956] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:45.343 [2024-11-27 00:46:21.908967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.343 [2024-11-27 00:46:21.908977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:45.343 [2024-11-27 00:46:21.908985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:45.343 [2024-11-27 00:46:21.908996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:45.343 [2024-11-27 00:46:21.909004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:45.343 [2024-11-27 00:46:21.909015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.343 [2024-11-27 00:46:21.909024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:45.343 [2024-11-27 00:46:21.909037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:23:45.343 [2024-11-27 00:46:21.909054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.343 [2024-11-27 00:46:21.922644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.343 [2024-11-27 00:46:21.922797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:45.343 [2024-11-27 00:46:21.922868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.525 ms 00:23:45.343 [2024-11-27 00:46:21.922904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.343 [2024-11-27 00:46:21.923012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.343 [2024-11-27 00:46:21.923037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:45.343 [2024-11-27 00:46:21.923059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:45.343 [2024-11-27 00:46:21.923078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.343 [2024-11-27 00:46:21.943190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.343 [2024-11-27 00:46:21.943385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:45.343 [2024-11-27 00:46:21.943453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.028 ms 00:23:45.343 [2024-11-27 00:46:21.943480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.343 [2024-11-27 00:46:21.943544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.343 [2024-11-27 00:46:21.943573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:45.343 [2024-11-27 00:46:21.943597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:45.343 [2024-11-27 00:46:21.943619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.343 [2024-11-27 00:46:21.944188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.343 [2024-11-27 00:46:21.944327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:45.343 [2024-11-27 00:46:21.944393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:23:45.343 [2024-11-27 00:46:21.944422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.343 [2024-11-27 00:46:21.944601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:45.343 [2024-11-27 00:46:21.944630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:45.343 [2024-11-27 00:46:21.944702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:45.343 [2024-11-27 00:46:21.944730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:21.952046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:21.952196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:45.344 [2024-11-27 00:46:21.952253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.276 ms 00:23:45.344 [2024-11-27 00:46:21.952617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:21.956396] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:45.344 [2024-11-27 00:46:21.956445] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:45.344 [2024-11-27 00:46:21.956463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:21.956478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:45.344 [2024-11-27 00:46:21.956488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:23:45.344 [2024-11-27 00:46:21.956496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:21.972584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:21.972627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:45.344 [2024-11-27 00:46:21.972640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.036 ms 00:23:45.344 [2024-11-27 00:46:21.972649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:21.975709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:21.975901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:45.344 [2024-11-27 00:46:21.975919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:23:45.344 [2024-11-27 00:46:21.975927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:21.978924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:21.979096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:45.344 [2024-11-27 00:46:21.979115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:23:45.344 [2024-11-27 00:46:21.979123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:21.979462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:21.979477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:45.344 [2024-11-27 00:46:21.979490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:23:45.344 [2024-11-27 00:46:21.979499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.004179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.004227] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:45.344 [2024-11-27 00:46:22.004239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.656 ms 00:23:45.344 [2024-11-27 00:46:22.004256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.012214] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:45.344 [2024-11-27 00:46:22.015222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.015270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:45.344 [2024-11-27 00:46:22.015283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.918 ms 00:23:45.344 [2024-11-27 00:46:22.015296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.015365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.015376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:45.344 [2024-11-27 00:46:22.015389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:45.344 [2024-11-27 00:46:22.015397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.017088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.017131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:45.344 [2024-11-27 00:46:22.017142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:23:45.344 [2024-11-27 00:46:22.017150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.017175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.017184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:45.344 [2024-11-27 00:46:22.017194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:45.344 [2024-11-27 00:46:22.017202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.017237] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:45.344 [2024-11-27 00:46:22.017248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.017259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:45.344 [2024-11-27 00:46:22.017270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:45.344 [2024-11-27 00:46:22.017278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.022485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.022532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:45.344 [2024-11-27 00:46:22.022545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.187 ms 00:23:45.344 [2024-11-27 00:46:22.022564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.022644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.344 [2024-11-27 00:46:22.022655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:45.344 [2024-11-27 00:46:22.022665] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:45.344 [2024-11-27 00:46:22.022682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.344 [2024-11-27 00:46:22.023891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.126 ms, result 0 00:23:46.729  [2024-11-27T00:46:24.457Z] Copying: 11/1024 [MB] (11 MBps)
[2024-11-27T00:47:17.318Z] Copying: 955/1024 [MB] (11 MBps) [2024-11-27T00:47:18.261Z] Copying: 975/1024 [MB] (19 MBps) [2024-11-27T00:47:19.286Z] Copying: 996/1024 [MB] (21 MBps) [2024-11-27T00:47:19.575Z] Copying: 1019/1024 [MB] (23 MBps) [2024-11-27T00:47:19.837Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-27 00:47:19.601452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.050 [2024-11-27 00:47:19.601551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:43.050 [2024-11-27 00:47:19.601576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:43.050 [2024-11-27 00:47:19.601587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.050 [2024-11-27 00:47:19.601614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:43.050 [2024-11-27 00:47:19.602504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.050 [2024-11-27 00:47:19.602538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:43.050 [2024-11-27 00:47:19.602562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:24:43.050 [2024-11-27 00:47:19.602574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.050 [2024-11-27 00:47:19.602841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.050 [2024-11-27 00:47:19.603086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:43.050 [2024-11-27 00:47:19.603109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:24:43.050 [2024-11-27 00:47:19.603120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.050 [2024-11-27 00:47:19.611595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.050 [2024-11-27 00:47:19.611652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:43.050 [2024-11-27 00:47:19.611667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.446 ms 00:24:43.050 [2024-11-27 00:47:19.611688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.050 [2024-11-27 00:47:19.618999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.050 [2024-11-27 00:47:19.619047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:43.050 [2024-11-27 00:47:19.619059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.258 ms 00:24:43.050 [2024-11-27 00:47:19.619070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.050 [2024-11-27 00:47:19.621920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.050 [2024-11-27 00:47:19.622133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:43.050 [2024-11-27 00:47:19.622154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:24:43.050 [2024-11-27 00:47:19.622163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.050 [2024-11-27 00:47:19.627399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.050 [2024-11-27 00:47:19.627456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:43.050 [2024-11-27 00:47:19.627468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.111 ms 00:24:43.050 [2024-11-27 00:47:19.627486] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.313 [2024-11-27 00:47:19.865413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.313 [2024-11-27 00:47:19.865604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:43.313 [2024-11-27 00:47:19.865625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 237.876 ms 00:24:43.313 [2024-11-27 00:47:19.865647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.313 [2024-11-27 00:47:19.869309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.313 [2024-11-27 00:47:19.869363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:43.313 [2024-11-27 00:47:19.869375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:24:43.313 [2024-11-27 00:47:19.869384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.313 [2024-11-27 00:47:19.872337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.313 [2024-11-27 00:47:19.872528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:43.313 [2024-11-27 00:47:19.872547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.908 ms 00:24:43.313 [2024-11-27 00:47:19.872555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.314 [2024-11-27 00:47:19.875021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.314 [2024-11-27 00:47:19.875074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:43.314 [2024-11-27 00:47:19.875084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.351 ms 00:24:43.314 [2024-11-27 00:47:19.875092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.314 [2024-11-27 00:47:19.877440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.314 [2024-11-27 00:47:19.877489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:43.314 [2024-11-27 00:47:19.877499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.274 ms 00:24:43.314 [2024-11-27 00:47:19.877507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.314 [2024-11-27 00:47:19.877548] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:43.314 [2024-11-27 00:47:19.877566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:43.314 [2024-11-27 00:47:19.877576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 
state: free 00:24:43.314 [2024-11-27 00:47:19.877635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 
/ 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.877999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:43.314 [2024-11-27 00:47:19.878104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878272] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:43.315 [2024-11-27 00:47:19.878460] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:43.315 [2024-11-27 00:47:19.878468] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f54237f-b41a-4fe3-9f71-78ff667aded7 00:24:43.315 [2024-11-27 00:47:19.878478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:43.315 [2024-11-27 00:47:19.878499] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25280 00:24:43.315 [2024-11-27 00:47:19.878507] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24320 00:24:43.315 [2024-11-27 00:47:19.878517] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0395 00:24:43.315 [2024-11-27 00:47:19.878525] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:43.315 [2024-11-27 00:47:19.878533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:43.315 [2024-11-27 00:47:19.878541] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:24:43.315 [2024-11-27 00:47:19.878549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:43.315 [2024-11-27 00:47:19.878562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:43.315 [2024-11-27 00:47:19.878570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.315 [2024-11-27 00:47:19.878585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:43.315 [2024-11-27 00:47:19.878593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:24:43.315 [2024-11-27 00:47:19.878601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.315 [2024-11-27 00:47:19.880913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.315 [2024-11-27 00:47:19.880946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:43.315 [2024-11-27 00:47:19.880957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:24:43.315 [2024-11-27 00:47:19.880966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.315 [2024-11-27 00:47:19.881081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.315 [2024-11-27 00:47:19.881100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:43.315 [2024-11-27 00:47:19.881110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:24:43.315 [2024-11-27 00:47:19.881121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.315 [2024-11-27 00:47:19.888644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.315 [2024-11-27 00:47:19.888703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.315 [2024-11-27 00:47:19.888719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.315 [2024-11-27 00:47:19.888729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.315 [2024-11-27 00:47:19.888792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.315 [2024-11-27 00:47:19.888801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.315 [2024-11-27 00:47:19.888810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.315 [2024-11-27 00:47:19.888822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.315 [2024-11-27 00:47:19.888916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.315 [2024-11-27 00:47:19.888930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.315 [2024-11-27 00:47:19.888939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.315 [2024-11-27 00:47:19.888948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.315 [2024-11-27 00:47:19.888968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.315 [2024-11-27 00:47:19.888977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.315 [2024-11-27 00:47:19.888985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.315 [2024-11-27 00:47:19.888992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.315 [2024-11-27 00:47:19.902770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.315 [2024-11-27 
00:47:19.902828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.315 [2024-11-27 00:47:19.902840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.902849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.912938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.316 [2024-11-27 00:47:19.912992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.316 [2024-11-27 00:47:19.913005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.913013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.913090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.316 [2024-11-27 00:47:19.913102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.316 [2024-11-27 00:47:19.913111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.913119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.913156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.316 [2024-11-27 00:47:19.913165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.316 [2024-11-27 00:47:19.913173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.913182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.913250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.316 [2024-11-27 00:47:19.913266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.316 [2024-11-27 00:47:19.913277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.913286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.913314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.316 [2024-11-27 00:47:19.913323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:43.316 [2024-11-27 00:47:19.913332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.913340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.913384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.316 [2024-11-27 00:47:19.913397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.316 [2024-11-27 00:47:19.913405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.913415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.913458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:43.316 [2024-11-27 00:47:19.913468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.316 [2024-11-27 00:47:19.913476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:43.316 [2024-11-27 00:47:19.913484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.316 [2024-11-27 00:47:19.913616] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 312.134 ms, result 0 00:24:43.577 00:24:43.577 00:24:43.577 00:47:20 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:45.493 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:45.493 00:47:22 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:45.493 00:47:22 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:45.493 00:47:22 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88539 00:24:45.754 00:47:22 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88539 ']' 00:24:45.754 Process with pid 88539 is not found 00:24:45.754 Remove shared memory files 00:24:45.754 00:47:22 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88539 00:24:45.754 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88539) - No such process 00:24:45.754 00:47:22 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88539 is not found' 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:45.754 00:47:22 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:45.754 ************************************ 00:24:45.754 END TEST ftl_restore 00:24:45.754 ************************************ 00:24:45.754 00:24:45.754 real 4m18.335s 00:24:45.754 user 4m6.297s 00:24:45.754 sys 0m12.303s 00:24:45.754 00:47:22 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.754 00:47:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:45.754 00:47:22 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:45.754 00:47:22 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:45.755 00:47:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.755 00:47:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:45.755 ************************************ 00:24:45.755 START TEST ftl_dirty_shutdown 00:24:45.755 ************************************ 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:45.755 * Looking for test storage... 
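
The md5sum check above is the actual pass/fail criterion for the restore test: a checksum recorded while the FTL bdev still held the data must match what is read back after the shutdown/restore cycle. A minimal sketch of that pattern, with hypothetical scratch paths (the real run keeps testfile under test/ftl and drives the lifecycle through restore.sh helpers):

    # record a checksum while the data is on the FTL bdev
    md5sum /tmp/ftl_testfile > /tmp/ftl_testfile.md5   # hypothetical path
    # ... tear the FTL bdev down, bring it back up, re-read the data ...
    # verify the restored contents; prints '/tmp/ftl_testfile: OK' on
    # success, matching the 'testfile: OK' line in the log above
    md5sum -c /tmp/ftl_testfile.md5
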
00:24:45.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.755 --rc genhtml_branch_coverage=1 00:24:45.755 --rc genhtml_function_coverage=1 00:24:45.755 --rc genhtml_legend=1 00:24:45.755 --rc geninfo_all_blocks=1 00:24:45.755 --rc geninfo_unexecuted_blocks=1 00:24:45.755 00:24:45.755 ' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.755 --rc genhtml_branch_coverage=1 00:24:45.755 --rc genhtml_function_coverage=1 00:24:45.755 --rc genhtml_legend=1 00:24:45.755 --rc geninfo_all_blocks=1 00:24:45.755 --rc geninfo_unexecuted_blocks=1 00:24:45.755 00:24:45.755 ' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.755 --rc genhtml_branch_coverage=1 00:24:45.755 --rc genhtml_function_coverage=1 00:24:45.755 --rc genhtml_legend=1 00:24:45.755 --rc geninfo_all_blocks=1 00:24:45.755 --rc geninfo_unexecuted_blocks=1 00:24:45.755 00:24:45.755 ' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.755 --rc genhtml_branch_coverage=1 00:24:45.755 --rc genhtml_function_coverage=1 00:24:45.755 --rc genhtml_legend=1 00:24:45.755 --rc geninfo_all_blocks=1 00:24:45.755 --rc geninfo_unexecuted_blocks=1 00:24:45.755 00:24:45.755 ' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:45.755 00:47:22 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91267 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91267 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91267 ']' 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.755 00:47:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:46.016 [2024-11-27 00:47:22.595448] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
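
The svcpid/waitforlisten pair traced above starts spdk_tgt in the background and blocks until it is serving RPCs on /var/tmp/spdk.sock. A rough bash equivalent under stated assumptions (probing readiness with rpc_get_methods is an assumption; the real waitforlisten helper also watches the pid):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # one reactor, pinned to core 0 as in the log
    svcpid=$!
    # poll the default UNIX-domain RPC socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
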
00:24:46.016 [2024-11-27 00:47:22.595560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91267 ] 00:24:46.016 [2024-11-27 00:47:22.751047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.016 [2024-11-27 00:47:22.770523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:46.959 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:47.220 { 00:24:47.220 "name": "nvme0n1", 00:24:47.220 "aliases": [ 00:24:47.220 "70996370-5d99-411e-998d-e4afd9668bf1" 00:24:47.220 ], 00:24:47.220 "product_name": "NVMe disk", 00:24:47.220 "block_size": 4096, 00:24:47.220 "num_blocks": 1310720, 00:24:47.220 "uuid": "70996370-5d99-411e-998d-e4afd9668bf1", 00:24:47.220 "numa_id": -1, 00:24:47.220 "assigned_rate_limits": { 00:24:47.220 "rw_ios_per_sec": 0, 00:24:47.220 "rw_mbytes_per_sec": 0, 00:24:47.220 "r_mbytes_per_sec": 0, 00:24:47.220 "w_mbytes_per_sec": 0 00:24:47.220 }, 00:24:47.220 "claimed": true, 00:24:47.220 "claim_type": "read_many_write_one", 00:24:47.220 "zoned": false, 00:24:47.220 "supported_io_types": { 00:24:47.220 "read": true, 00:24:47.220 "write": true, 00:24:47.220 "unmap": true, 00:24:47.220 "flush": true, 00:24:47.220 "reset": true, 00:24:47.220 "nvme_admin": true, 00:24:47.220 "nvme_io": true, 00:24:47.220 "nvme_io_md": false, 00:24:47.220 "write_zeroes": true, 00:24:47.220 "zcopy": false, 00:24:47.220 "get_zone_info": false, 00:24:47.220 "zone_management": false, 00:24:47.220 "zone_append": false, 00:24:47.220 "compare": true, 00:24:47.220 "compare_and_write": false, 00:24:47.220 "abort": true, 00:24:47.220 "seek_hole": false, 00:24:47.220 "seek_data": false, 00:24:47.220 
"copy": true, 00:24:47.220 "nvme_iov_md": false 00:24:47.220 }, 00:24:47.220 "driver_specific": { 00:24:47.220 "nvme": [ 00:24:47.220 { 00:24:47.220 "pci_address": "0000:00:11.0", 00:24:47.220 "trid": { 00:24:47.220 "trtype": "PCIe", 00:24:47.220 "traddr": "0000:00:11.0" 00:24:47.220 }, 00:24:47.220 "ctrlr_data": { 00:24:47.220 "cntlid": 0, 00:24:47.220 "vendor_id": "0x1b36", 00:24:47.220 "model_number": "QEMU NVMe Ctrl", 00:24:47.220 "serial_number": "12341", 00:24:47.220 "firmware_revision": "8.0.0", 00:24:47.220 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:47.220 "oacs": { 00:24:47.220 "security": 0, 00:24:47.220 "format": 1, 00:24:47.220 "firmware": 0, 00:24:47.220 "ns_manage": 1 00:24:47.220 }, 00:24:47.220 "multi_ctrlr": false, 00:24:47.220 "ana_reporting": false 00:24:47.220 }, 00:24:47.220 "vs": { 00:24:47.220 "nvme_version": "1.4" 00:24:47.220 }, 00:24:47.220 "ns_data": { 00:24:47.220 "id": 1, 00:24:47.220 "can_share": false 00:24:47.220 } 00:24:47.220 } 00:24:47.220 ], 00:24:47.220 "mp_policy": "active_passive" 00:24:47.220 } 00:24:47.220 } 00:24:47.220 ]' 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:47.220 00:47:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:47.220 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5730e268-fd55-41b2-acf6-6f97aa1104f8 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:47.483 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5730e268-fd55-41b2-acf6-6f97aa1104f8 00:24:47.745 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:48.007 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=c8c61ace-5ca4-4057-8bce-d701a6a6128c 00:24:48.007 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c8c61ace-5ca4-4057-8bce-d701a6a6128c 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:48.269 00:47:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.531 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:48.531 { 00:24:48.531 "name": "dfc4740a-7aa0-4f96-8ab9-36fdd141a257", 00:24:48.531 "aliases": [ 00:24:48.531 "lvs/nvme0n1p0" 00:24:48.531 ], 00:24:48.531 "product_name": "Logical Volume", 00:24:48.531 "block_size": 4096, 00:24:48.531 "num_blocks": 26476544, 00:24:48.531 "uuid": "dfc4740a-7aa0-4f96-8ab9-36fdd141a257", 00:24:48.531 "assigned_rate_limits": { 00:24:48.531 "rw_ios_per_sec": 0, 00:24:48.531 "rw_mbytes_per_sec": 0, 00:24:48.531 "r_mbytes_per_sec": 0, 00:24:48.531 "w_mbytes_per_sec": 0 00:24:48.531 }, 00:24:48.531 "claimed": false, 00:24:48.531 "zoned": false, 00:24:48.531 "supported_io_types": { 00:24:48.531 "read": true, 00:24:48.531 "write": true, 00:24:48.531 "unmap": true, 00:24:48.531 "flush": false, 00:24:48.531 "reset": true, 00:24:48.531 "nvme_admin": false, 00:24:48.531 "nvme_io": false, 00:24:48.531 "nvme_io_md": false, 00:24:48.531 "write_zeroes": true, 00:24:48.531 "zcopy": false, 00:24:48.531 "get_zone_info": false, 00:24:48.531 "zone_management": false, 00:24:48.531 "zone_append": false, 00:24:48.531 "compare": false, 00:24:48.531 "compare_and_write": false, 00:24:48.531 "abort": false, 00:24:48.531 "seek_hole": true, 00:24:48.531 "seek_data": true, 00:24:48.531 "copy": false, 00:24:48.531 "nvme_iov_md": false 00:24:48.531 }, 00:24:48.531 "driver_specific": { 00:24:48.531 "lvol": { 00:24:48.531 "lvol_store_uuid": "c8c61ace-5ca4-4057-8bce-d701a6a6128c", 00:24:48.532 "base_bdev": "nvme0n1", 00:24:48.532 "thin_provision": true, 00:24:48.532 "num_allocated_clusters": 0, 00:24:48.532 "snapshot": false, 00:24:48.532 "clone": false, 00:24:48.532 "esnap_clone": false 00:24:48.532 } 00:24:48.532 } 00:24:48.532 } 00:24:48.532 ]' 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:48.532 00:47:25 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:48.794 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:49.055 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:49.055 { 00:24:49.055 "name": "dfc4740a-7aa0-4f96-8ab9-36fdd141a257", 00:24:49.055 "aliases": [ 00:24:49.055 "lvs/nvme0n1p0" 00:24:49.055 ], 00:24:49.055 "product_name": "Logical Volume", 00:24:49.055 "block_size": 4096, 00:24:49.055 "num_blocks": 26476544, 00:24:49.055 "uuid": "dfc4740a-7aa0-4f96-8ab9-36fdd141a257", 00:24:49.055 "assigned_rate_limits": { 00:24:49.055 "rw_ios_per_sec": 0, 00:24:49.055 "rw_mbytes_per_sec": 0, 00:24:49.055 "r_mbytes_per_sec": 0, 00:24:49.055 "w_mbytes_per_sec": 0 00:24:49.055 }, 00:24:49.055 "claimed": false, 00:24:49.055 "zoned": false, 00:24:49.055 "supported_io_types": { 00:24:49.055 "read": true, 00:24:49.056 "write": true, 00:24:49.056 "unmap": true, 00:24:49.056 "flush": false, 00:24:49.056 "reset": true, 00:24:49.056 "nvme_admin": false, 00:24:49.056 "nvme_io": false, 00:24:49.056 "nvme_io_md": false, 00:24:49.056 "write_zeroes": true, 00:24:49.056 "zcopy": false, 00:24:49.056 "get_zone_info": false, 00:24:49.056 "zone_management": false, 00:24:49.056 "zone_append": false, 00:24:49.056 "compare": false, 00:24:49.056 "compare_and_write": false, 00:24:49.056 "abort": false, 00:24:49.056 "seek_hole": true, 00:24:49.056 "seek_data": true, 00:24:49.056 "copy": false, 00:24:49.056 "nvme_iov_md": false 00:24:49.056 }, 00:24:49.056 "driver_specific": { 00:24:49.056 "lvol": { 00:24:49.056 "lvol_store_uuid": "c8c61ace-5ca4-4057-8bce-d701a6a6128c", 00:24:49.056 "base_bdev": "nvme0n1", 00:24:49.056 "thin_provision": true, 00:24:49.056 "num_allocated_clusters": 0, 00:24:49.056 "snapshot": false, 00:24:49.056 "clone": false, 00:24:49.056 "esnap_clone": false 00:24:49.056 } 00:24:49.056 } 00:24:49.056 } 00:24:49.056 ]' 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:49.056 00:47:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:49.317 00:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:49.317 00:47:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:49.317 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:49.317 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:49.317 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:49.317 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:49.317 00:47:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dfc4740a-7aa0-4f96-8ab9-36fdd141a257 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:49.578 { 00:24:49.578 "name": "dfc4740a-7aa0-4f96-8ab9-36fdd141a257", 00:24:49.578 "aliases": [ 00:24:49.578 "lvs/nvme0n1p0" 00:24:49.578 ], 00:24:49.578 "product_name": "Logical Volume", 00:24:49.578 "block_size": 4096, 00:24:49.578 "num_blocks": 26476544, 00:24:49.578 "uuid": "dfc4740a-7aa0-4f96-8ab9-36fdd141a257", 00:24:49.578 "assigned_rate_limits": { 00:24:49.578 "rw_ios_per_sec": 0, 00:24:49.578 "rw_mbytes_per_sec": 0, 00:24:49.578 "r_mbytes_per_sec": 0, 00:24:49.578 "w_mbytes_per_sec": 0 00:24:49.578 }, 00:24:49.578 "claimed": false, 00:24:49.578 "zoned": false, 00:24:49.578 "supported_io_types": { 00:24:49.578 "read": true, 00:24:49.578 "write": true, 00:24:49.578 "unmap": true, 00:24:49.578 "flush": false, 00:24:49.578 "reset": true, 00:24:49.578 "nvme_admin": false, 00:24:49.578 "nvme_io": false, 00:24:49.578 "nvme_io_md": false, 00:24:49.578 "write_zeroes": true, 00:24:49.578 "zcopy": false, 00:24:49.578 "get_zone_info": false, 00:24:49.578 "zone_management": false, 00:24:49.578 "zone_append": false, 00:24:49.578 "compare": false, 00:24:49.578 "compare_and_write": false, 00:24:49.578 "abort": false, 00:24:49.578 "seek_hole": true, 00:24:49.578 "seek_data": true, 00:24:49.578 "copy": false, 00:24:49.578 "nvme_iov_md": false 00:24:49.578 }, 00:24:49.578 "driver_specific": { 00:24:49.578 "lvol": { 00:24:49.578 "lvol_store_uuid": "c8c61ace-5ca4-4057-8bce-d701a6a6128c", 00:24:49.578 "base_bdev": "nvme0n1", 00:24:49.578 "thin_provision": true, 00:24:49.578 "num_allocated_clusters": 0, 00:24:49.578 "snapshot": false, 00:24:49.578 "clone": false, 00:24:49.578 "esnap_clone": false 00:24:49.578 } 00:24:49.578 } 00:24:49.578 } 00:24:49.578 ]' 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d dfc4740a-7aa0-4f96-8ab9-36fdd141a257 
--l2p_dram_limit 10' 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:49.578 00:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dfc4740a-7aa0-4f96-8ab9-36fdd141a257 --l2p_dram_limit 10 -c nvc0n1p0 00:24:49.841 [2024-11-27 00:47:26.391876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.391916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.841 [2024-11-27 00:47:26.391929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:49.841 [2024-11-27 00:47:26.391937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.391978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.391988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.841 [2024-11-27 00:47:26.391995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:49.841 [2024-11-27 00:47:26.392006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.392025] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.841 [2024-11-27 00:47:26.392307] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.841 [2024-11-27 00:47:26.392335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.392344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.841 [2024-11-27 00:47:26.392351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:24:49.841 [2024-11-27 00:47:26.392362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.392386] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a9010db2-f580-4033-9b44-da706c853bb8 00:24:49.841 [2024-11-27 00:47:26.393341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.393365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:49.841 [2024-11-27 00:47:26.393375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:49.841 [2024-11-27 00:47:26.393382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.398017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.398048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.841 [2024-11-27 00:47:26.398057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.574 ms 00:24:49.841 [2024-11-27 00:47:26.398063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.398123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.398130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.841 [2024-11-27 00:47:26.398138] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:49.841 [2024-11-27 00:47:26.398146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.398186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.398198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.841 [2024-11-27 00:47:26.398205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.841 [2024-11-27 00:47:26.398211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.398229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.841 [2024-11-27 00:47:26.399496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.399524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.841 [2024-11-27 00:47:26.399532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:24:49.841 [2024-11-27 00:47:26.399539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.399563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.399571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.841 [2024-11-27 00:47:26.399577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:49.841 [2024-11-27 00:47:26.399586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.399604] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:49.841 [2024-11-27 00:47:26.399717] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.841 [2024-11-27 00:47:26.399727] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.841 [2024-11-27 00:47:26.399736] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:49.841 [2024-11-27 00:47:26.399746] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.841 [2024-11-27 00:47:26.399755] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.841 [2024-11-27 00:47:26.399762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.841 [2024-11-27 00:47:26.399771] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.841 [2024-11-27 00:47:26.399777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.841 [2024-11-27 00:47:26.399784] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.841 [2024-11-27 00:47:26.399790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.399799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.841 [2024-11-27 00:47:26.399805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:24:49.841 [2024-11-27 00:47:26.399814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.399895] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.841 [2024-11-27 00:47:26.399906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.841 [2024-11-27 00:47:26.399912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:49.841 [2024-11-27 00:47:26.399920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.841 [2024-11-27 00:47:26.399995] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.841 [2024-11-27 00:47:26.400004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.841 [2024-11-27 00:47:26.400010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.841 [2024-11-27 00:47:26.400018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.841 [2024-11-27 00:47:26.400023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.841 [2024-11-27 00:47:26.400030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.841 [2024-11-27 00:47:26.400036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.841 [2024-11-27 00:47:26.400042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.841 [2024-11-27 00:47:26.400047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.841 [2024-11-27 00:47:26.400055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.841 [2024-11-27 00:47:26.400060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.841 [2024-11-27 00:47:26.400066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.841 [2024-11-27 00:47:26.400071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.841 [2024-11-27 00:47:26.400079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.841 [2024-11-27 00:47:26.400086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:49.841 [2024-11-27 00:47:26.400092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.841 [2024-11-27 00:47:26.400098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.841 [2024-11-27 00:47:26.400104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:49.841 [2024-11-27 00:47:26.400109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.841 [2024-11-27 00:47:26.400115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.841 [2024-11-27 00:47:26.400121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.841 [2024-11-27 00:47:26.400127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.841 [2024-11-27 00:47:26.400133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.841 [2024-11-27 00:47:26.400139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.841 [2024-11-27 00:47:26.400144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.841 [2024-11-27 00:47:26.400151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.842 [2024-11-27 00:47:26.400157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.842 [2024-11-27 00:47:26.400164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.842 [2024-11-27 00:47:26.400170] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.842 [2024-11-27 00:47:26.400179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:49.842 [2024-11-27 00:47:26.400184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.842 [2024-11-27 00:47:26.400191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.842 [2024-11-27 00:47:26.400197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:49.842 [2024-11-27 00:47:26.400204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.842 [2024-11-27 00:47:26.400210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.842 [2024-11-27 00:47:26.400219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:49.842 [2024-11-27 00:47:26.400225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.842 [2024-11-27 00:47:26.400232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.842 [2024-11-27 00:47:26.400237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:49.842 [2024-11-27 00:47:26.400244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.842 [2024-11-27 00:47:26.400251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.842 [2024-11-27 00:47:26.400258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:49.842 [2024-11-27 00:47:26.400264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.842 [2024-11-27 00:47:26.400271] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.842 [2024-11-27 00:47:26.400278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.842 [2024-11-27 00:47:26.400291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.842 [2024-11-27 00:47:26.400299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.842 [2024-11-27 00:47:26.400308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.842 [2024-11-27 00:47:26.400314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.842 [2024-11-27 00:47:26.400321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.842 [2024-11-27 00:47:26.400327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.842 [2024-11-27 00:47:26.400334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.842 [2024-11-27 00:47:26.400341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.842 [2024-11-27 00:47:26.400351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.842 [2024-11-27 00:47:26.400360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.842 [2024-11-27 00:47:26.400368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.842 [2024-11-27 00:47:26.400374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:49.842 [2024-11-27 00:47:26.400383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:49.842 [2024-11-27 00:47:26.400389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:49.842 [2024-11-27 00:47:26.400397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:49.842 [2024-11-27 00:47:26.400404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:49.842 [2024-11-27 00:47:26.400413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:49.842 [2024-11-27 00:47:26.400418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:49.842 [2024-11-27 00:47:26.400426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:49.842 [2024-11-27 00:47:26.400433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:49.842 [2024-11-27 00:47:26.400441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:49.842 [2024-11-27 00:47:26.400446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:49.842 [2024-11-27 00:47:26.400455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:49.842 [2024-11-27 00:47:26.400461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:49.842 [2024-11-27 00:47:26.400468] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.842 [2024-11-27 00:47:26.400475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.842 [2024-11-27 00:47:26.400484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.842 [2024-11-27 00:47:26.400491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.842 [2024-11-27 00:47:26.400498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.842 [2024-11-27 00:47:26.400504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.842 [2024-11-27 00:47:26.400512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.842 [2024-11-27 00:47:26.400518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.842 [2024-11-27 00:47:26.400531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:24:49.842 [2024-11-27 00:47:26.400536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.842 [2024-11-27 00:47:26.400566] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:49.842 [2024-11-27 00:47:26.400574] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:54.050 [2024-11-27 00:47:30.036693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.050 [2024-11-27 00:47:30.036784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:54.050 [2024-11-27 00:47:30.036806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3636.104 ms 00:24:54.050 [2024-11-27 00:47:30.036816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.050 [2024-11-27 00:47:30.052334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.050 [2024-11-27 00:47:30.052403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.050 [2024-11-27 00:47:30.052422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.370 ms 00:24:54.050 [2024-11-27 00:47:30.052436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.050 [2024-11-27 00:47:30.052588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.050 [2024-11-27 00:47:30.052602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:54.050 [2024-11-27 00:47:30.052614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:54.050 [2024-11-27 00:47:30.052622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.050 [2024-11-27 00:47:30.066000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.050 [2024-11-27 00:47:30.066056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.050 [2024-11-27 00:47:30.066071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.335 ms 00:24:54.050 [2024-11-27 00:47:30.066083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.066122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.066132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.051 [2024-11-27 00:47:30.066143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:54.051 [2024-11-27 00:47:30.066151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.066751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.066795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.051 [2024-11-27 00:47:30.066811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:24:54.051 [2024-11-27 00:47:30.066821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.066967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.066981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.051 [2024-11-27 00:47:30.066995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:24:54.051 [2024-11-27 00:47:30.067006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.075504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.075553] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.051 [2024-11-27 00:47:30.075567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.470 ms 00:24:54.051 [2024-11-27 00:47:30.075575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.102590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:54.051 [2024-11-27 00:47:30.106410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.106465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:54.051 [2024-11-27 00:47:30.106480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.760 ms 00:24:54.051 [2024-11-27 00:47:30.106493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.191833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.191931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:54.051 [2024-11-27 00:47:30.191947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.289 ms 00:24:54.051 [2024-11-27 00:47:30.192164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.192515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.192541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:54.051 [2024-11-27 00:47:30.192554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:24:54.051 [2024-11-27 00:47:30.192566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.197898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.197957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:54.051 [2024-11-27 00:47:30.197972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.290 ms 00:24:54.051 [2024-11-27 00:47:30.197984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.203105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.203164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:54.051 [2024-11-27 00:47:30.203176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.071 ms 00:24:54.051 [2024-11-27 00:47:30.203186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.203530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.203552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.051 [2024-11-27 00:47:30.203562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:24:54.051 [2024-11-27 00:47:30.203575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.251597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.251661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:54.051 [2024-11-27 00:47:30.251676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.998 ms 00:24:54.051 [2024-11-27 00:47:30.251687] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.258557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.258615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:54.051 [2024-11-27 00:47:30.258628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:24:54.051 [2024-11-27 00:47:30.258639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.264525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.264582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:54.051 [2024-11-27 00:47:30.264592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.838 ms 00:24:54.051 [2024-11-27 00:47:30.264604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.271015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.271073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:54.051 [2024-11-27 00:47:30.271084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.364 ms 00:24:54.051 [2024-11-27 00:47:30.271098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.271152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.271173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:54.051 [2024-11-27 00:47:30.271183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:54.051 [2024-11-27 00:47:30.271194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.271268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.051 [2024-11-27 00:47:30.271287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:54.051 [2024-11-27 00:47:30.271296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:54.051 [2024-11-27 00:47:30.271309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.051 [2024-11-27 00:47:30.273203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3880.835 ms, result 0 00:24:54.051 { 00:24:54.051 "name": "ftl0", 00:24:54.051 "uuid": "a9010db2-f580-4033-9b44-da706c853bb8" 00:24:54.051 } 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:54.051 /dev/nbd0 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:54.051 1+0 records in 00:24:54.051 1+0 records out 00:24:54.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613248 s, 6.7 MB/s 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:54.051 00:47:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:54.312 [2024-11-27 00:47:30.837595] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:24:54.312 [2024-11-27 00:47:30.837748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91411 ] 00:24:54.312 [2024-11-27 00:47:31.000600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.312 [2024-11-27 00:47:31.029488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.699  [2024-11-27T00:47:33.430Z] Copying: 184/1024 [MB] (184 MBps) [2024-11-27T00:47:34.374Z] Copying: 370/1024 [MB] (186 MBps) [2024-11-27T00:47:35.315Z] Copying: 557/1024 [MB] (186 MBps) [2024-11-27T00:47:36.250Z] Copying: 750/1024 [MB] (193 MBps) [2024-11-27T00:47:36.250Z] Copying: 1003/1024 [MB] (252 MBps) [2024-11-27T00:47:36.510Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:24:59.723 00:24:59.723 00:47:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:02.270 00:47:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:02.270 [2024-11-27 00:47:38.601765] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
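At this point the payload exists: the first spdk_dd run filled the test file with 1 GiB of /dev/urandom data (262144 blocks of 4096 bytes, matching the block_size and data_size set at the top of the script) at an average of 201 MBps, and md5sum recorded its checksum for later comparison. The spdk_dd instance starting here (pid 91495, continued below) replays that file onto /dev/nbd0 and therefore through the FTL bdev. In outline, with $testfile standing in for the script's path and the other values taken from the trace:

  # Generate the payload, checksum it, then replay it through the nbd export.
  bs=4096; count=262144   # 4 KiB x 262144 blocks = 1 GiB
  spdk_dd -m 0x2 --if=/dev/urandom --of="$testfile" --bs=$bs --count=$count
  md5sum "$testfile"      # reference checksum for later comparison
  spdk_dd -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=$bs --count=$count --oflag=direct

The much lower throughput of the second pass (roughly 20 MBps average, visible below, versus 201 MBps for the file write) is consistent with every 4 KiB direct I/O making a round trip through the kernel nbd device into the FTL write path instead of landing in a local file.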
00:25:02.270 [2024-11-27 00:47:38.601896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91495 ] 00:25:02.270 [2024-11-27 00:47:38.758351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.270 [2024-11-27 00:47:38.782909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.214  [2024-11-27T00:47:40.947Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-27T00:47:41.889Z] Copying: 25/1024 [MB] (10 MBps) [2024-11-27T00:47:43.264Z] Copying: 35032/1048576 [kB] (9232 kBps) [2024-11-27T00:47:44.199Z] Copying: 46/1024 [MB] (11 MBps) [2024-11-27T00:47:45.131Z] Copying: 66/1024 [MB] (20 MBps) [2024-11-27T00:47:46.065Z] Copying: 89/1024 [MB] (23 MBps) [2024-11-27T00:47:47.030Z] Copying: 106/1024 [MB] (17 MBps) [2024-11-27T00:47:47.959Z] Copying: 127/1024 [MB] (20 MBps) [2024-11-27T00:47:48.890Z] Copying: 147/1024 [MB] (19 MBps) [2024-11-27T00:47:50.273Z] Copying: 166/1024 [MB] (19 MBps) [2024-11-27T00:47:51.207Z] Copying: 179/1024 [MB] (12 MBps) [2024-11-27T00:47:52.214Z] Copying: 198/1024 [MB] (19 MBps) [2024-11-27T00:47:53.150Z] Copying: 218/1024 [MB] (19 MBps) [2024-11-27T00:47:54.083Z] Copying: 238/1024 [MB] (20 MBps) [2024-11-27T00:47:55.017Z] Copying: 259/1024 [MB] (20 MBps) [2024-11-27T00:47:55.950Z] Copying: 278/1024 [MB] (19 MBps) [2024-11-27T00:47:56.884Z] Copying: 298/1024 [MB] (20 MBps) [2024-11-27T00:47:58.257Z] Copying: 321/1024 [MB] (23 MBps) [2024-11-27T00:47:59.191Z] Copying: 345/1024 [MB] (23 MBps) [2024-11-27T00:48:00.120Z] Copying: 367/1024 [MB] (22 MBps) [2024-11-27T00:48:01.055Z] Copying: 395/1024 [MB] (27 MBps) [2024-11-27T00:48:01.991Z] Copying: 422/1024 [MB] (27 MBps) [2024-11-27T00:48:02.926Z] Copying: 446/1024 [MB] (23 MBps) [2024-11-27T00:48:03.860Z] Copying: 469/1024 [MB] (22 MBps) [2024-11-27T00:48:05.232Z] Copying: 491/1024 [MB] (21 MBps) [2024-11-27T00:48:06.164Z] Copying: 512/1024 [MB] (21 MBps) [2024-11-27T00:48:07.097Z] Copying: 535/1024 [MB] (22 MBps) [2024-11-27T00:48:08.030Z] Copying: 557/1024 [MB] (22 MBps) [2024-11-27T00:48:08.965Z] Copying: 577/1024 [MB] (20 MBps) [2024-11-27T00:48:09.898Z] Copying: 607/1024 [MB] (29 MBps) [2024-11-27T00:48:11.287Z] Copying: 627/1024 [MB] (19 MBps) [2024-11-27T00:48:11.853Z] Copying: 649/1024 [MB] (21 MBps) [2024-11-27T00:48:13.228Z] Copying: 668/1024 [MB] (19 MBps) [2024-11-27T00:48:14.163Z] Copying: 688/1024 [MB] (20 MBps) [2024-11-27T00:48:15.098Z] Copying: 708/1024 [MB] (20 MBps) [2024-11-27T00:48:16.033Z] Copying: 732/1024 [MB] (23 MBps) [2024-11-27T00:48:16.967Z] Copying: 752/1024 [MB] (19 MBps) [2024-11-27T00:48:17.900Z] Copying: 771/1024 [MB] (19 MBps) [2024-11-27T00:48:19.273Z] Copying: 795/1024 [MB] (24 MBps) [2024-11-27T00:48:20.208Z] Copying: 816/1024 [MB] (20 MBps) [2024-11-27T00:48:21.141Z] Copying: 836/1024 [MB] (20 MBps) [2024-11-27T00:48:22.075Z] Copying: 856/1024 [MB] (19 MBps) [2024-11-27T00:48:23.008Z] Copying: 878/1024 [MB] (22 MBps) [2024-11-27T00:48:24.005Z] Copying: 901/1024 [MB] (22 MBps) [2024-11-27T00:48:24.940Z] Copying: 924/1024 [MB] (23 MBps) [2024-11-27T00:48:25.875Z] Copying: 947/1024 [MB] (22 MBps) [2024-11-27T00:48:27.250Z] Copying: 973/1024 [MB] (26 MBps) [2024-11-27T00:48:28.187Z] Copying: 995/1024 [MB] (21 MBps) [2024-11-27T00:48:28.187Z] Copying: 1017/1024 [MB] (22 MBps) [2024-11-27T00:48:28.447Z] Copying: 1024/1024 [MB] (average 20 MBps) 
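With the full 1 GiB written through the nbd export, the test enters the shutdown sequence traced below: flush the block device, detach the nbd export, then unload the FTL bdev. Per the unload trace that follows, bdev_ftl_unload persists the L2P, NV-cache metadata, valid map, P2L, band and trim metadata, and the superblock, then sets the FTL clean state. The commands, as issued by dirty_shutdown.sh:

  sync /dev/nbd0                          # flush cached writes to the nbd device
  scripts/rpc.py nbd_stop_disk /dev/nbd0  # detach the kernel nbd export
  scripts/rpc.py bdev_ftl_unload -b ftl0  # orderly FTL shutdown: persists metadata,
                                          # then sets the clean-state flag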
00:25:51.660 00:25:51.660 00:48:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:51.660 00:48:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:51.922 00:48:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:51.922 [2024-11-27 00:48:28.675143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.922 [2024-11-27 00:48:28.675189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:51.922 [2024-11-27 00:48:28.675204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:51.922 [2024-11-27 00:48:28.675213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.922 [2024-11-27 00:48:28.675237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:51.922 [2024-11-27 00:48:28.675683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.922 [2024-11-27 00:48:28.675713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:51.922 [2024-11-27 00:48:28.675722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:25:51.922 [2024-11-27 00:48:28.675733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.922 [2024-11-27 00:48:28.678373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.922 [2024-11-27 00:48:28.678410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:51.922 [2024-11-27 00:48:28.678420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:25:51.922 [2024-11-27 00:48:28.678429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.922 [2024-11-27 00:48:28.693837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.922 [2024-11-27 00:48:28.693884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:51.922 [2024-11-27 00:48:28.693894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.391 ms 00:25:51.922 [2024-11-27 00:48:28.693907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.922 [2024-11-27 00:48:28.700073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.922 [2024-11-27 00:48:28.700106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:51.922 [2024-11-27 00:48:28.700116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:25:51.922 [2024-11-27 00:48:28.700126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.922 [2024-11-27 00:48:28.702073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.922 [2024-11-27 00:48:28.702111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:51.922 [2024-11-27 00:48:28.702120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.883 ms 00:25:51.922 [2024-11-27 00:48:28.702130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.185 [2024-11-27 00:48:28.707343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.186 [2024-11-27 00:48:28.707383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:52.186 [2024-11-27 00:48:28.707392] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 5.173 ms 00:25:52.186 [2024-11-27 00:48:28.707403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.186 [2024-11-27 00:48:28.707521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.186 [2024-11-27 00:48:28.707532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:52.186 [2024-11-27 00:48:28.707541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:52.186 [2024-11-27 00:48:28.707551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.186 [2024-11-27 00:48:28.710213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.186 [2024-11-27 00:48:28.710263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:52.186 [2024-11-27 00:48:28.710275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:25:52.186 [2024-11-27 00:48:28.710284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.186 [2024-11-27 00:48:28.711774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.186 [2024-11-27 00:48:28.711812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:52.186 [2024-11-27 00:48:28.711822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:25:52.186 [2024-11-27 00:48:28.711830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.186 [2024-11-27 00:48:28.713073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.186 [2024-11-27 00:48:28.713109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:52.186 [2024-11-27 00:48:28.713118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:25:52.186 [2024-11-27 00:48:28.713126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.186 [2024-11-27 00:48:28.714326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.186 [2024-11-27 00:48:28.714361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:52.186 [2024-11-27 00:48:28.714370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:25:52.186 [2024-11-27 00:48:28.714379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.186 [2024-11-27 00:48:28.714410] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:52.186 [2024-11-27 00:48:28.714430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 
00:48:28.714497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 
00:25:52.186 [2024-11-27 00:48:28.714711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 
wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.714992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.715001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.715009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:52.186 [2024-11-27 00:48:28.715025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:52.187 [2024-11-27 00:48:28.715329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:52.187 [2024-11-27 00:48:28.715337] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9010db2-f580-4033-9b44-da706c853bb8 00:25:52.187 [2024-11-27 00:48:28.715346] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:52.187 [2024-11-27 00:48:28.715353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:52.187 [2024-11-27 00:48:28.715363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:52.187 [2024-11-27 00:48:28.715372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:52.187 [2024-11-27 00:48:28.715381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:52.187 [2024-11-27 00:48:28.715389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:25:52.187 [2024-11-27 00:48:28.715398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:52.187 [2024-11-27 00:48:28.715404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:52.187 [2024-11-27 00:48:28.715412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:52.187 [2024-11-27 00:48:28.715420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.187 [2024-11-27 00:48:28.715429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:52.187 [2024-11-27 00:48:28.715439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:25:52.187 [2024-11-27 00:48:28.715448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.716945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.187 [2024-11-27 00:48:28.716976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:52.187 [2024-11-27 00:48:28.716985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:25:52.187 [2024-11-27 00:48:28.717000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.717079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.187 [2024-11-27 00:48:28.717092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:52.187 [2024-11-27 00:48:28.717101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:52.187 [2024-11-27 00:48:28.717110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.722476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.722517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.187 [2024-11-27 00:48:28.722527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.722536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.722586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.722598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.187 [2024-11-27 00:48:28.722611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.722620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.722669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.722682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.187 [2024-11-27 00:48:28.722691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.722700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.722716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.722726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.187 [2024-11-27 00:48:28.722735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.722744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.732205] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.732250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.187 [2024-11-27 00:48:28.732260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.732269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.740117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.740168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.187 [2024-11-27 00:48:28.740181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.740191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.740256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.740272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.187 [2024-11-27 00:48:28.740283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.740292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.740324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.740334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.187 [2024-11-27 00:48:28.740342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.740352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.740414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.740426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.187 [2024-11-27 00:48:28.740434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.740443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.740473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.740484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:52.187 [2024-11-27 00:48:28.740491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.740500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.740539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.740552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.187 [2024-11-27 00:48:28.740561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.740574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.187 [2024-11-27 00:48:28.740616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.187 [2024-11-27 00:48:28.740641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.187 [2024-11-27 00:48:28.740649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.187 [2024-11-27 00:48:28.740661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:52.188 [2024-11-27 00:48:28.740791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.617 ms, result 0 00:25:52.188 true 00:25:52.188 00:48:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91267 00:25:52.188 00:48:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91267 00:25:52.188 00:48:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:52.188 [2024-11-27 00:48:28.832460] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:25:52.188 [2024-11-27 00:48:28.832585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92019 ] 00:25:52.449 [2024-11-27 00:48:28.992080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.449 [2024-11-27 00:48:29.022219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.391  [2024-11-27T00:48:31.122Z] Copying: 208/1024 [MB] (208 MBps) [2024-11-27T00:48:32.509Z] Copying: 468/1024 [MB] (259 MBps) [2024-11-27T00:48:33.454Z] Copying: 727/1024 [MB] (259 MBps) [2024-11-27T00:48:33.454Z] Copying: 985/1024 [MB] (257 MBps) [2024-11-27T00:48:33.454Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:25:56.667 00:25:56.667 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91267 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:56.667 00:48:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.667 [2024-11-27 00:48:33.449137] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
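The `ftl.ftl_dirty_shutdown` commands above are the heart of the test: the SPDK target that owns the FTL device is killed with `kill -9`, random data is staged with `spdk_dd`, and a second `spdk_dd` (which brings up its own SPDK app from `ftl.json`, as the `Starting SPDK` line shows) writes that data into `ftl0`. A minimal sketch of the same pattern, using only commands and flags that appear in this log; `$SPDK_BIN_DIR`, the PID handling, and the relative file paths are placeholders for this run's values:

```bash
#!/usr/bin/env bash
# Sketch of the dirty-shutdown pattern exercised above (simplified sequencing).
# Assumes $SPDK_BIN_DIR points at an SPDK build and ftl.json describes ftl0.
set -e

"$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &   # target that owns the FTL device
tgt_pid=$!

# ... I/O against ftl0 is in flight at this point ...

kill -9 "$tgt_pid"                  # simulate a crash of the target process
rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"

# Stage 1 GiB of random data (262144 x 4096 B blocks), then replay it into
# the second half of ftl0; spdk_dd runs as its own SPDK app via ftl.json.
"$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
"$SPDK_BIN_DIR/spdk_dd" --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
    --json=ftl.json
```

Note that the startup sequence below ends with a `Set FTL dirty state` step, the counterpart of the `Set FTL clean state` step in the shutdown trace above.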
00:25:56.667 [2024-11-27 00:48:33.449261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92082 ] 00:25:56.928 [2024-11-27 00:48:33.608186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.928 [2024-11-27 00:48:33.634126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.188 [2024-11-27 00:48:33.749417] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.188 [2024-11-27 00:48:33.749489] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.188 [2024-11-27 00:48:33.813828] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:57.188 [2024-11-27 00:48:33.814380] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:57.188 [2024-11-27 00:48:33.815073] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:57.449 [2024-11-27 00:48:34.231332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.449 [2024-11-27 00:48:34.231401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:57.449 [2024-11-27 00:48:34.231419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.449 [2024-11-27 00:48:34.231428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.449 [2024-11-27 00:48:34.231491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.449 [2024-11-27 00:48:34.231503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.449 [2024-11-27 00:48:34.231512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:57.449 [2024-11-27 00:48:34.231520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.449 [2024-11-27 00:48:34.231545] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:57.449 [2024-11-27 00:48:34.231969] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:57.449 [2024-11-27 00:48:34.232018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.449 [2024-11-27 00:48:34.232029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.449 [2024-11-27 00:48:34.232039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:25:57.449 [2024-11-27 00:48:34.232047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.449 [2024-11-27 00:48:34.233817] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:57.713 [2024-11-27 00:48:34.237770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.237828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:57.713 [2024-11-27 00:48:34.237840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.956 ms 00:25:57.713 [2024-11-27 00:48:34.237875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.237957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.237969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:57.713 [2024-11-27 00:48:34.237980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:57.713 [2024-11-27 00:48:34.237990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.246077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.246109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.713 [2024-11-27 00:48:34.246120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.041 ms 00:25:57.713 [2024-11-27 00:48:34.246127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.246220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.246232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.713 [2024-11-27 00:48:34.246240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:57.713 [2024-11-27 00:48:34.246253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.246302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.246313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.713 [2024-11-27 00:48:34.246320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:57.713 [2024-11-27 00:48:34.246327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.246355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.713 [2024-11-27 00:48:34.247757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.247788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.713 [2024-11-27 00:48:34.247805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:25:57.713 [2024-11-27 00:48:34.247814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.247841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.247865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.713 [2024-11-27 00:48:34.247874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:57.713 [2024-11-27 00:48:34.247881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.247900] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:57.713 [2024-11-27 00:48:34.247919] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:57.713 [2024-11-27 00:48:34.247956] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:57.713 [2024-11-27 00:48:34.247973] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:57.713 [2024-11-27 00:48:34.248076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.713 [2024-11-27 00:48:34.248090] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.713 
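The `Verify layout` step just below dumps each region as a `Region <name>` / `offset: <n> MiB` / `blocks: <n> MiB` triple spread across three NOTICE lines. When working with such a log, the triples fold into one row per region with a little awk; a sketch, assuming the console's one-entry-per-line output is saved in `$LOG` (a placeholder):

```bash
# Fold ftl_layout.c dump_region triples into "name offset size" rows (MiB).
# $LOG is a placeholder for a saved copy of the console output, one log
# entry per line as the console prints them.
awk '
  /dump_region.*Region /  { name = $NF }        # "... Region sb" -> region name
  /dump_region.*offset:/  { off  = $(NF-1) }    # "... offset: 0.00 MiB"
  /dump_region.*blocks:/  {                     # "... blocks: 0.12 MiB" ends a triple
      printf "%-16s %10s MiB %10s MiB\n", name, off, $(NF-1)
  }' "$LOG"
```

Run over the dump below this yields rows like `sb 0.00 MiB 0.12 MiB`, which makes gaps or overlaps between regions easy to spot.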
[2024-11-27 00:48:34.248100] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:57.713 [2024-11-27 00:48:34.248114] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248123] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248130] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:57.713 [2024-11-27 00:48:34.248138] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.713 [2024-11-27 00:48:34.248145] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.713 [2024-11-27 00:48:34.248154] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.713 [2024-11-27 00:48:34.248161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.248168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.713 [2024-11-27 00:48:34.248176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:25:57.713 [2024-11-27 00:48:34.248185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.248266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.713 [2024-11-27 00:48:34.248275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.713 [2024-11-27 00:48:34.248283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:57.713 [2024-11-27 00:48:34.248295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.713 [2024-11-27 00:48:34.248398] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.713 [2024-11-27 00:48:34.248417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.713 [2024-11-27 00:48:34.248426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.713 [2024-11-27 00:48:34.248452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.713 [2024-11-27 00:48:34.248476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.713 [2024-11-27 00:48:34.248491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.713 [2024-11-27 00:48:34.248500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:57.713 [2024-11-27 00:48:34.248507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.713 [2024-11-27 00:48:34.248515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.713 [2024-11-27 00:48:34.248522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:57.713 [2024-11-27 00:48:34.248530] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.713 [2024-11-27 00:48:34.248554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.713 [2024-11-27 00:48:34.248579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.713 [2024-11-27 00:48:34.248601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.713 [2024-11-27 00:48:34.248623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.713 [2024-11-27 00:48:34.248646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:57.713 [2024-11-27 00:48:34.248653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.713 [2024-11-27 00:48:34.248660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.714 [2024-11-27 00:48:34.248670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:57.714 [2024-11-27 00:48:34.248677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.714 [2024-11-27 00:48:34.248684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:57.714 [2024-11-27 00:48:34.248692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:57.714 [2024-11-27 00:48:34.248700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.714 [2024-11-27 00:48:34.248707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.714 [2024-11-27 00:48:34.248714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:57.714 [2024-11-27 00:48:34.248721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.714 [2024-11-27 00:48:34.248728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:57.714 [2024-11-27 00:48:34.248736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:57.714 [2024-11-27 00:48:34.248744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.714 [2024-11-27 00:48:34.248751] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.714 [2024-11-27 00:48:34.248759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.714 [2024-11-27 00:48:34.248768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.714 [2024-11-27 00:48:34.248775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.714 [2024-11-27 
00:48:34.248783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:57.714 [2024-11-27 00:48:34.248792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.714 [2024-11-27 00:48:34.248799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.714 [2024-11-27 00:48:34.248807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.714 [2024-11-27 00:48:34.248814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.714 [2024-11-27 00:48:34.248821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.714 [2024-11-27 00:48:34.248829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.714 [2024-11-27 00:48:34.248837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.714 [2024-11-27 00:48:34.248846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:57.714 [2024-11-27 00:48:34.248867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:57.714 [2024-11-27 00:48:34.248875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:57.714 [2024-11-27 00:48:34.248882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:57.714 [2024-11-27 00:48:34.248889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:57.714 [2024-11-27 00:48:34.248895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:57.714 [2024-11-27 00:48:34.248902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:57.714 [2024-11-27 00:48:34.248909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:57.714 [2024-11-27 00:48:34.248917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:57.714 [2024-11-27 00:48:34.248926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:57.714 [2024-11-27 00:48:34.248933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:57.714 [2024-11-27 00:48:34.248940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:57.714 [2024-11-27 00:48:34.248947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:57.714 [2024-11-27 00:48:34.248954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:57.714 [2024-11-27 00:48:34.248961] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:57.714 [2024-11-27 00:48:34.248971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.714 [2024-11-27 00:48:34.248979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.714 [2024-11-27 00:48:34.248987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.714 [2024-11-27 00:48:34.248994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.714 [2024-11-27 00:48:34.249001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.714 [2024-11-27 00:48:34.249008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.249015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.714 [2024-11-27 00:48:34.249022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:25:57.714 [2024-11-27 00:48:34.249029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.258347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.258385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.714 [2024-11-27 00:48:34.258394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.279 ms 00:25:57.714 [2024-11-27 00:48:34.258406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.258484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.258493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.714 [2024-11-27 00:48:34.258503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:57.714 [2024-11-27 00:48:34.258510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.276936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.277066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.714 [2024-11-27 00:48:34.277082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.374 ms 00:25:57.714 [2024-11-27 00:48:34.277091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.277136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.277146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.714 [2024-11-27 00:48:34.277158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.714 [2024-11-27 00:48:34.277165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.277581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.277619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.714 [2024-11-27 00:48:34.277635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:25:57.714 [2024-11-27 00:48:34.277653] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.277811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.277827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.714 [2024-11-27 00:48:34.277836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:57.714 [2024-11-27 00:48:34.277847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.283985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.284018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.714 [2024-11-27 00:48:34.284029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.093 ms 00:25:57.714 [2024-11-27 00:48:34.284038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.286689] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:57.714 [2024-11-27 00:48:34.286727] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.714 [2024-11-27 00:48:34.286740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.286749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.714 [2024-11-27 00:48:34.286759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:25:57.714 [2024-11-27 00:48:34.286767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.302104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.302155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.714 [2024-11-27 00:48:34.302166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.291 ms 00:25:57.714 [2024-11-27 00:48:34.302175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.303990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.304020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.714 [2024-11-27 00:48:34.304029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:25:57.714 [2024-11-27 00:48:34.304037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.305662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.305693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:57.714 [2024-11-27 00:48:34.305702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:25:57.714 [2024-11-27 00:48:34.305708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.306050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.306061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.714 [2024-11-27 00:48:34.306075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:25:57.714 [2024-11-27 00:48:34.306085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 
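Every `trace_step` block above and below carries a `duration: <n> ms` line, and each management process ends with a `finish_msg` total (e.g. `'FTL shutdown', duration = 65.617 ms` earlier). Summing the per-step durations gives a breakdown to set against that total; a sketch, again over a saved console log `$LOG` (placeholder), with the caveat that the steps need not account for every millisecond of the process:

```bash
# Sum per-step FTL management durations ("duration: 0.281 ms" lines).
# $LOG is a placeholder for the saved console output, one entry per line.
awk '/trace_step/ && /duration:/ { total += $(NF-1); n++ }
     END { printf "%d steps, %.3f ms spent in steps\n", n, total }' "$LOG"
```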
[2024-11-27 00:48:34.323570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.714 [2024-11-27 00:48:34.323612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.714 [2024-11-27 00:48:34.323624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.469 ms 00:25:57.714 [2024-11-27 00:48:34.323632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.714 [2024-11-27 00:48:34.331222] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.715 [2024-11-27 00:48:34.333898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.715 [2024-11-27 00:48:34.333926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.715 [2024-11-27 00:48:34.333937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.226 ms 00:25:57.715 [2024-11-27 00:48:34.333946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.715 [2024-11-27 00:48:34.334021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.715 [2024-11-27 00:48:34.334035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.715 [2024-11-27 00:48:34.334047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:57.715 [2024-11-27 00:48:34.334054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.715 [2024-11-27 00:48:34.334124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.715 [2024-11-27 00:48:34.334149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.715 [2024-11-27 00:48:34.334163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:57.715 [2024-11-27 00:48:34.334171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.715 [2024-11-27 00:48:34.334195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.715 [2024-11-27 00:48:34.334203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.715 [2024-11-27 00:48:34.334210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.715 [2024-11-27 00:48:34.334220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.715 [2024-11-27 00:48:34.334259] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.715 [2024-11-27 00:48:34.334274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.715 [2024-11-27 00:48:34.334282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.715 [2024-11-27 00:48:34.334290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:57.715 [2024-11-27 00:48:34.334297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.715 [2024-11-27 00:48:34.337996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.715 [2024-11-27 00:48:34.338039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.715 [2024-11-27 00:48:34.338050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:25:57.715 [2024-11-27 00:48:34.338058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.715 [2024-11-27 00:48:34.338132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.715 [2024-11-27 00:48:34.338165] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.715 [2024-11-27 00:48:34.338175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:57.715 [2024-11-27 00:48:34.338182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.715 [2024-11-27 00:48:34.339239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.423 ms, result 0 00:25:58.661  [2024-11-27T00:48:36.394Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-27T00:48:37.782Z] Copying: 37/1024 [MB] (20 MBps) [2024-11-27T00:48:38.354Z] Copying: 53/1024 [MB] (16 MBps) [2024-11-27T00:48:39.741Z] Copying: 70/1024 [MB] (16 MBps) [2024-11-27T00:48:40.686Z] Copying: 84/1024 [MB] (14 MBps) [2024-11-27T00:48:41.630Z] Copying: 108/1024 [MB] (23 MBps) [2024-11-27T00:48:42.578Z] Copying: 119/1024 [MB] (11 MBps) [2024-11-27T00:48:43.522Z] Copying: 136/1024 [MB] (16 MBps) [2024-11-27T00:48:44.468Z] Copying: 150/1024 [MB] (14 MBps) [2024-11-27T00:48:45.414Z] Copying: 165/1024 [MB] (15 MBps) [2024-11-27T00:48:46.358Z] Copying: 195/1024 [MB] (29 MBps) [2024-11-27T00:48:47.746Z] Copying: 219/1024 [MB] (24 MBps) [2024-11-27T00:48:48.691Z] Copying: 239/1024 [MB] (20 MBps) [2024-11-27T00:48:49.635Z] Copying: 256/1024 [MB] (17 MBps) [2024-11-27T00:48:50.580Z] Copying: 285/1024 [MB] (28 MBps) [2024-11-27T00:48:51.525Z] Copying: 313/1024 [MB] (27 MBps) [2024-11-27T00:48:52.468Z] Copying: 328/1024 [MB] (14 MBps) [2024-11-27T00:48:53.412Z] Copying: 358/1024 [MB] (29 MBps) [2024-11-27T00:48:54.406Z] Copying: 382/1024 [MB] (24 MBps) [2024-11-27T00:48:55.374Z] Copying: 403/1024 [MB] (21 MBps) [2024-11-27T00:48:56.759Z] Copying: 436/1024 [MB] (32 MBps) [2024-11-27T00:48:57.701Z] Copying: 455/1024 [MB] (19 MBps) [2024-11-27T00:48:58.645Z] Copying: 482/1024 [MB] (26 MBps) [2024-11-27T00:48:59.590Z] Copying: 515/1024 [MB] (32 MBps) [2024-11-27T00:49:00.535Z] Copying: 547/1024 [MB] (32 MBps) [2024-11-27T00:49:01.479Z] Copying: 571/1024 [MB] (24 MBps) [2024-11-27T00:49:02.425Z] Copying: 598/1024 [MB] (26 MBps) [2024-11-27T00:49:03.369Z] Copying: 616/1024 [MB] (18 MBps) [2024-11-27T00:49:04.758Z] Copying: 636/1024 [MB] (19 MBps) [2024-11-27T00:49:05.703Z] Copying: 651/1024 [MB] (15 MBps) [2024-11-27T00:49:06.649Z] Copying: 666/1024 [MB] (14 MBps) [2024-11-27T00:49:07.593Z] Copying: 685/1024 [MB] (19 MBps) [2024-11-27T00:49:08.538Z] Copying: 718/1024 [MB] (32 MBps) [2024-11-27T00:49:09.481Z] Copying: 740/1024 [MB] (21 MBps) [2024-11-27T00:49:10.424Z] Copying: 759/1024 [MB] (19 MBps) [2024-11-27T00:49:11.369Z] Copying: 772/1024 [MB] (13 MBps) [2024-11-27T00:49:12.758Z] Copying: 789/1024 [MB] (17 MBps) [2024-11-27T00:49:13.704Z] Copying: 809/1024 [MB] (19 MBps) [2024-11-27T00:49:14.649Z] Copying: 826/1024 [MB] (16 MBps) [2024-11-27T00:49:15.592Z] Copying: 841/1024 [MB] (15 MBps) [2024-11-27T00:49:16.537Z] Copying: 859/1024 [MB] (17 MBps) [2024-11-27T00:49:17.478Z] Copying: 871/1024 [MB] (11 MBps) [2024-11-27T00:49:18.422Z] Copying: 902/1024 [MB] (30 MBps) [2024-11-27T00:49:19.411Z] Copying: 920/1024 [MB] (18 MBps) [2024-11-27T00:49:20.354Z] Copying: 936/1024 [MB] (15 MBps) [2024-11-27T00:49:21.741Z] Copying: 949/1024 [MB] (13 MBps) [2024-11-27T00:49:22.686Z] Copying: 978/1024 [MB] (28 MBps) [2024-11-27T00:49:23.629Z] Copying: 1001/1024 [MB] (23 MBps) [2024-11-27T00:49:24.572Z] Copying: 1019/1024 [MB] (17 MBps) [2024-11-27T00:49:24.572Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-27 00:49:24.268721] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.785 [2024-11-27 00:49:24.268791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:47.785 [2024-11-27 00:49:24.268806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:47.785 [2024-11-27 00:49:24.268815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.785 [2024-11-27 00:49:24.268992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:47.785 [2024-11-27 00:49:24.269757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.785 [2024-11-27 00:49:24.269794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:47.785 [2024-11-27 00:49:24.269805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:26:47.785 [2024-11-27 00:49:24.269813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.785 [2024-11-27 00:49:24.280085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.785 [2024-11-27 00:49:24.280128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:47.785 [2024-11-27 00:49:24.280138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:26:47.785 [2024-11-27 00:49:24.280147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.785 [2024-11-27 00:49:24.305154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.785 [2024-11-27 00:49:24.305193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:47.785 [2024-11-27 00:49:24.305212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.986 ms 00:26:47.785 [2024-11-27 00:49:24.305219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.785 [2024-11-27 00:49:24.311340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.785 [2024-11-27 00:49:24.311374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:47.785 [2024-11-27 00:49:24.311384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.089 ms 00:26:47.785 [2024-11-27 00:49:24.311392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.785 [2024-11-27 00:49:24.313391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.785 [2024-11-27 00:49:24.313430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:47.785 [2024-11-27 00:49:24.313439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:26:47.785 [2024-11-27 00:49:24.313446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.785 [2024-11-27 00:49:24.317721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.785 [2024-11-27 00:49:24.317761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:47.785 [2024-11-27 00:49:24.317781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.242 ms 00:26:47.785 [2024-11-27 00:49:24.317789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.048 [2024-11-27 00:49:24.610879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.048 [2024-11-27 00:49:24.610955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:48.048 [2024-11-27 00:49:24.610969] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 293.051 ms 00:26:48.048 [2024-11-27 00:49:24.610978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.048 [2024-11-27 00:49:24.614726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.048 [2024-11-27 00:49:24.614788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:48.048 [2024-11-27 00:49:24.614799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.729 ms 00:26:48.048 [2024-11-27 00:49:24.614806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.048 [2024-11-27 00:49:24.617747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.048 [2024-11-27 00:49:24.617799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:48.048 [2024-11-27 00:49:24.617809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:26:48.048 [2024-11-27 00:49:24.617817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.048 [2024-11-27 00:49:24.620133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.048 [2024-11-27 00:49:24.620185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:48.048 [2024-11-27 00:49:24.620196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:26:48.048 [2024-11-27 00:49:24.620203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.048 [2024-11-27 00:49:24.622939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.048 [2024-11-27 00:49:24.622993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:48.048 [2024-11-27 00:49:24.623003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:26:48.048 [2024-11-27 00:49:24.623011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.048 [2024-11-27 00:49:24.623053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:48.048 [2024-11-27 00:49:24.623076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106496 / 261120 wr_cnt: 1 state: open 00:26:48.048 [2024-11-27 00:49:24.623093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623167] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:48.048 [2024-11-27 00:49:24.623340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 
[2024-11-27 00:49:24.623362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:26:48.049 [2024-11-27 00:49:24.623563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:48.049 [2024-11-27 00:49:24.623912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:48.049 [2024-11-27 00:49:24.623920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9010db2-f580-4033-9b44-da706c853bb8 00:26:48.049 [2024-11-27 00:49:24.623930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106496 00:26:48.049 [2024-11-27 00:49:24.623945] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107456 00:26:48.049 [2024-11-27 00:49:24.623954] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106496 00:26:48.049 [2024-11-27 00:49:24.623970] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:26:48.049 [2024-11-27 00:49:24.623977] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:48.049 [2024-11-27 00:49:24.623986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:48.049 [2024-11-27 00:49:24.623993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:48.049 [2024-11-27 00:49:24.624001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:48.049 [2024-11-27 00:49:24.624008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:48.049 [2024-11-27 00:49:24.624016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:48.049 [2024-11-27 00:49:24.624025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:48.049 [2024-11-27 00:49:24.624036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:26:48.049 [2024-11-27 00:49:24.624044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.049 [2024-11-27 00:49:24.626423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.049 [2024-11-27 00:49:24.626466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:48.049 [2024-11-27 00:49:24.626478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:26:48.049 [2024-11-27 00:49:24.626495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.049 [2024-11-27 00:49:24.626627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.049 [2024-11-27 00:49:24.626638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:48.049 [2024-11-27 00:49:24.626647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:26:48.049 [2024-11-27 00:49:24.626655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.049 [2024-11-27 00:49:24.634506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.049 [2024-11-27 00:49:24.634559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:48.049 [2024-11-27 00:49:24.634570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.049 [2024-11-27 00:49:24.634578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.634640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.634651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:48.050 [2024-11-27 00:49:24.634659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.634666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.634721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.634732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:48.050 [2024-11-27 00:49:24.634741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.634749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.634765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.634775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:48.050 [2024-11-27 00:49:24.634783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.634791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.648502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.648556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:48.050 [2024-11-27 00:49:24.648567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.648576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 
00:49:24.658569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.658622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:48.050 [2024-11-27 00:49:24.658634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.658656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.658725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.658735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:48.050 [2024-11-27 00:49:24.658744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.658752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.658791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.658801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:48.050 [2024-11-27 00:49:24.658815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.658823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.658906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.658917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:48.050 [2024-11-27 00:49:24.658926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.658933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.658960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.658969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:48.050 [2024-11-27 00:49:24.658982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.658992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.659030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.659039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:48.050 [2024-11-27 00:49:24.659047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.659056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.659101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.050 [2024-11-27 00:49:24.659112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:48.050 [2024-11-27 00:49:24.659123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.050 [2024-11-27 00:49:24.659131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.050 [2024-11-27 00:49:24.659255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.045 ms, result 0 00:26:48.994 00:26:48.994 00:26:48.994 00:49:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:50.992 00:49:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:50.992 [2024-11-27 00:49:27.714358] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:26:50.992 [2024-11-27 00:49:27.714472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92635 ] 00:26:51.254 [2024-11-27 00:49:27.873154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.254 [2024-11-27 00:49:27.902746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.254 [2024-11-27 00:49:28.012908] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.254 [2024-11-27 00:49:28.012986] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:51.517 [2024-11-27 00:49:28.173153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.517 [2024-11-27 00:49:28.173213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:51.517 [2024-11-27 00:49:28.173233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:51.517 [2024-11-27 00:49:28.173242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.517 [2024-11-27 00:49:28.173307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.517 [2024-11-27 00:49:28.173319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:51.517 [2024-11-27 00:49:28.173328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:51.517 [2024-11-27 00:49:28.173335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.517 [2024-11-27 00:49:28.173364] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:51.517 [2024-11-27 00:49:28.173648] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:51.517 [2024-11-27 00:49:28.173674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.517 [2024-11-27 00:49:28.173683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:51.517 [2024-11-27 00:49:28.173695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:26:51.517 [2024-11-27 00:49:28.173707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.517 [2024-11-27 00:49:28.175481] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:51.517 [2024-11-27 00:49:28.179451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.517 [2024-11-27 00:49:28.179502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:51.517 [2024-11-27 00:49:28.179528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:26:51.517 [2024-11-27 00:49:28.179539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.517 [2024-11-27 00:49:28.179617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.517 [2024-11-27 00:49:28.179633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:51.517 [2024-11-27 
00:49:28.179642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:51.517 [2024-11-27 00:49:28.179650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.517 [2024-11-27 00:49:28.188063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.517 [2024-11-27 00:49:28.188103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:51.517 [2024-11-27 00:49:28.188117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.367 ms 00:26:51.517 [2024-11-27 00:49:28.188125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.517 [2024-11-27 00:49:28.188237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.517 [2024-11-27 00:49:28.188247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:51.517 [2024-11-27 00:49:28.188255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:51.518 [2024-11-27 00:49:28.188264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.518 [2024-11-27 00:49:28.188322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.518 [2024-11-27 00:49:28.188332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:51.518 [2024-11-27 00:49:28.188343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:51.518 [2024-11-27 00:49:28.188355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.518 [2024-11-27 00:49:28.188378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:51.518 [2024-11-27 00:49:28.190441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.518 [2024-11-27 00:49:28.190476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:51.518 [2024-11-27 00:49:28.190486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:26:51.518 [2024-11-27 00:49:28.190495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.518 [2024-11-27 00:49:28.190529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.518 [2024-11-27 00:49:28.190538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:51.518 [2024-11-27 00:49:28.190547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:51.518 [2024-11-27 00:49:28.190560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.518 [2024-11-27 00:49:28.190582] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:51.518 [2024-11-27 00:49:28.190602] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:51.518 [2024-11-27 00:49:28.190643] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:51.518 [2024-11-27 00:49:28.190667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:51.518 [2024-11-27 00:49:28.190779] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:51.518 [2024-11-27 00:49:28.190790] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:51.518 [2024-11-27 00:49:28.190809] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:51.518 [2024-11-27 00:49:28.190819] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:51.518 [2024-11-27 00:49:28.190829] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:51.518 [2024-11-27 00:49:28.190838] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:51.518 [2024-11-27 00:49:28.190845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:51.518 [2024-11-27 00:49:28.190868] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:51.518 [2024-11-27 00:49:28.190876] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:51.518 [2024-11-27 00:49:28.190884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.518 [2024-11-27 00:49:28.190891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:51.518 [2024-11-27 00:49:28.190899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:26:51.518 [2024-11-27 00:49:28.190906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.518 [2024-11-27 00:49:28.190993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.518 [2024-11-27 00:49:28.191001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:51.518 [2024-11-27 00:49:28.191013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:51.518 [2024-11-27 00:49:28.191021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.518 [2024-11-27 00:49:28.191124] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:51.518 [2024-11-27 00:49:28.191135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:51.518 [2024-11-27 00:49:28.191144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:51.518 [2024-11-27 00:49:28.191179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:51.518 [2024-11-27 00:49:28.191204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.518 [2024-11-27 00:49:28.191220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:51.518 [2024-11-27 00:49:28.191228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:51.518 [2024-11-27 00:49:28.191235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:51.518 [2024-11-27 00:49:28.191244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:51.518 [2024-11-27 00:49:28.191257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:51.518 [2024-11-27 00:49:28.191264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:26:51.518 [2024-11-27 00:49:28.191272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:51.518 [2024-11-27 00:49:28.191280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:51.518 [2024-11-27 00:49:28.191303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:51.518 [2024-11-27 00:49:28.191327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:51.518 [2024-11-27 00:49:28.191350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:51.518 [2024-11-27 00:49:28.191374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:51.518 [2024-11-27 00:49:28.191400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.518 [2024-11-27 00:49:28.191416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:51.518 [2024-11-27 00:49:28.191424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:51.518 [2024-11-27 00:49:28.191431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:51.518 [2024-11-27 00:49:28.191441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:51.518 [2024-11-27 00:49:28.191450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:51.518 [2024-11-27 00:49:28.191457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:51.518 [2024-11-27 00:49:28.191473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:51.518 [2024-11-27 00:49:28.191480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191488] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:51.518 [2024-11-27 00:49:28.191499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:51.518 [2024-11-27 00:49:28.191507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:51.518 [2024-11-27 00:49:28.191516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:51.518 [2024-11-27 00:49:28.191524] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:51.518 [2024-11-27 00:49:28.191531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:51.518 [2024-11-27 00:49:28.191537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:51.519 [2024-11-27 00:49:28.191545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:51.519 [2024-11-27 00:49:28.191552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:51.519 [2024-11-27 00:49:28.191559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:51.519 [2024-11-27 00:49:28.191567] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:51.519 [2024-11-27 00:49:28.191576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.519 [2024-11-27 00:49:28.191584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:51.519 [2024-11-27 00:49:28.191591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:51.519 [2024-11-27 00:49:28.191598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:51.519 [2024-11-27 00:49:28.191605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:51.519 [2024-11-27 00:49:28.191612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:51.519 [2024-11-27 00:49:28.191619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:51.519 [2024-11-27 00:49:28.191626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:51.519 [2024-11-27 00:49:28.191636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:51.519 [2024-11-27 00:49:28.191643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:51.519 [2024-11-27 00:49:28.191651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:51.519 [2024-11-27 00:49:28.191657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:51.519 [2024-11-27 00:49:28.191664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:51.519 [2024-11-27 00:49:28.191671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:51.519 [2024-11-27 00:49:28.191678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:51.519 [2024-11-27 00:49:28.191686] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:51.519 
[2024-11-27 00:49:28.191695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:51.519 [2024-11-27 00:49:28.191703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:51.519 [2024-11-27 00:49:28.191711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:51.519 [2024-11-27 00:49:28.191718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:51.519 [2024-11-27 00:49:28.191725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:51.519 [2024-11-27 00:49:28.191732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.191740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:51.519 [2024-11-27 00:49:28.191747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:26:51.519 [2024-11-27 00:49:28.191760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.206129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.206175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:51.519 [2024-11-27 00:49:28.206188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.323 ms 00:26:51.519 [2024-11-27 00:49:28.206196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.206287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.206296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:51.519 [2024-11-27 00:49:28.206305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:51.519 [2024-11-27 00:49:28.206313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.224409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.224461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:51.519 [2024-11-27 00:49:28.224473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.041 ms 00:26:51.519 [2024-11-27 00:49:28.224482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.224524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.224534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:51.519 [2024-11-27 00:49:28.224543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:51.519 [2024-11-27 00:49:28.224556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.225066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.225137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:51.519 [2024-11-27 00:49:28.225155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:26:51.519 [2024-11-27 00:49:28.225165] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.225305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.225322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:51.519 [2024-11-27 00:49:28.225331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:26:51.519 [2024-11-27 00:49:28.225340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.232132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.232167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:51.519 [2024-11-27 00:49:28.232178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.769 ms 00:26:51.519 [2024-11-27 00:49:28.232186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.235231] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:51.519 [2024-11-27 00:49:28.235279] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:51.519 [2024-11-27 00:49:28.235296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.235305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:51.519 [2024-11-27 00:49:28.235320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:26:51.519 [2024-11-27 00:49:28.235328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.250756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.250802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:51.519 [2024-11-27 00:49:28.250814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.378 ms 00:26:51.519 [2024-11-27 00:49:28.250824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.253564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.253604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:51.519 [2024-11-27 00:49:28.253613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.678 ms 00:26:51.519 [2024-11-27 00:49:28.253620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.256091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.256131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:51.519 [2024-11-27 00:49:28.256141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:26:51.519 [2024-11-27 00:49:28.256149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 00:49:28.256484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.519 [2024-11-27 00:49:28.256500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:51.519 [2024-11-27 00:49:28.256512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:26:51.519 [2024-11-27 00:49:28.256520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.519 [2024-11-27 
00:49:28.279993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.280039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:51.520 [2024-11-27 00:49:28.280051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.450 ms 00:26:51.520 [2024-11-27 00:49:28.280059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.287967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:51.520 [2024-11-27 00:49:28.290755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.290790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:51.520 [2024-11-27 00:49:28.290802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.651 ms 00:26:51.520 [2024-11-27 00:49:28.290811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.290895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.290907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:51.520 [2024-11-27 00:49:28.290920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:51.520 [2024-11-27 00:49:28.290928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.292525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.292563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:51.520 [2024-11-27 00:49:28.292574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:26:51.520 [2024-11-27 00:49:28.292582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.292606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.292620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:51.520 [2024-11-27 00:49:28.292629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:51.520 [2024-11-27 00:49:28.292637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.292671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:51.520 [2024-11-27 00:49:28.292682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.292691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:51.520 [2024-11-27 00:49:28.292703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:51.520 [2024-11-27 00:49:28.292713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.296945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.296986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:51.520 [2024-11-27 00:49:28.296997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:26:51.520 [2024-11-27 00:49:28.297005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.297083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:51.520 [2024-11-27 00:49:28.297093] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:51.520 [2024-11-27 00:49:28.297102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:51.520 [2024-11-27 00:49:28.297113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:51.520 [2024-11-27 00:49:28.298151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 124.547 ms, result 0 00:26:52.911  [2024-11-27T00:49:30.642Z] Copying: 1028/1048576 [kB] (1028 kBps) [2024-11-27T00:49:31.589Z] Copying: 5544/1048576 [kB] (4516 kBps) [2024-11-27T00:49:32.534Z] Copying: 19/1024 [MB] (14 MBps) [2024-11-27T00:49:33.922Z] Copying: 37/1024 [MB] (17 MBps) [2024-11-27T00:49:34.493Z] Copying: 66/1024 [MB] (29 MBps) [2024-11-27T00:49:35.880Z] Copying: 108/1024 [MB] (41 MBps) [2024-11-27T00:49:36.824Z] Copying: 136/1024 [MB] (28 MBps) [2024-11-27T00:49:37.768Z] Copying: 166/1024 [MB] (29 MBps) [2024-11-27T00:49:38.712Z] Copying: 201/1024 [MB] (34 MBps) [2024-11-27T00:49:39.663Z] Copying: 231/1024 [MB] (30 MBps) [2024-11-27T00:49:40.609Z] Copying: 276/1024 [MB] (44 MBps) [2024-11-27T00:49:41.554Z] Copying: 312/1024 [MB] (36 MBps) [2024-11-27T00:49:42.501Z] Copying: 341/1024 [MB] (28 MBps) [2024-11-27T00:49:43.889Z] Copying: 362/1024 [MB] (20 MBps) [2024-11-27T00:49:44.832Z] Copying: 385/1024 [MB] (22 MBps) [2024-11-27T00:49:45.777Z] Copying: 415/1024 [MB] (30 MBps) [2024-11-27T00:49:46.722Z] Copying: 459/1024 [MB] (43 MBps) [2024-11-27T00:49:47.663Z] Copying: 485/1024 [MB] (25 MBps) [2024-11-27T00:49:48.609Z] Copying: 513/1024 [MB] (28 MBps) [2024-11-27T00:49:49.554Z] Copying: 545/1024 [MB] (31 MBps) [2024-11-27T00:49:50.498Z] Copying: 576/1024 [MB] (31 MBps) [2024-11-27T00:49:51.886Z] Copying: 611/1024 [MB] (34 MBps) [2024-11-27T00:49:52.832Z] Copying: 637/1024 [MB] (26 MBps) [2024-11-27T00:49:53.777Z] Copying: 671/1024 [MB] (34 MBps) [2024-11-27T00:49:54.724Z] Copying: 713/1024 [MB] (41 MBps) [2024-11-27T00:49:55.669Z] Copying: 734/1024 [MB] (21 MBps) [2024-11-27T00:49:56.615Z] Copying: 764/1024 [MB] (29 MBps) [2024-11-27T00:49:57.560Z] Copying: 808/1024 [MB] (44 MBps) [2024-11-27T00:49:58.571Z] Copying: 840/1024 [MB] (31 MBps) [2024-11-27T00:49:59.566Z] Copying: 857/1024 [MB] (17 MBps) [2024-11-27T00:50:00.510Z] Copying: 881/1024 [MB] (23 MBps) [2024-11-27T00:50:01.896Z] Copying: 907/1024 [MB] (26 MBps) [2024-11-27T00:50:02.840Z] Copying: 934/1024 [MB] (26 MBps) [2024-11-27T00:50:03.786Z] Copying: 979/1024 [MB] (44 MBps) [2024-11-27T00:50:04.358Z] Copying: 998/1024 [MB] (19 MBps) [2024-11-27T00:50:06.275Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-27 00:50:05.853363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.853496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:29.488 [2024-11-27 00:50:05.853544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:29.488 [2024-11-27 00:50:05.853567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.853625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:29.488 [2024-11-27 00:50:05.854700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.854748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:29.488 [2024-11-27 00:50:05.854761] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:27:29.488 [2024-11-27 00:50:05.854771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.855053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.855081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:29.488 [2024-11-27 00:50:05.855092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:27:29.488 [2024-11-27 00:50:05.855102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.868060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.868115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:29.488 [2024-11-27 00:50:05.868134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.937 ms 00:27:29.488 [2024-11-27 00:50:05.868143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.875004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.875070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:29.488 [2024-11-27 00:50:05.875084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:27:29.488 [2024-11-27 00:50:05.875093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.877827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.877895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:29.488 [2024-11-27 00:50:05.877907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.673 ms 00:27:29.488 [2024-11-27 00:50:05.877915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.883200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.883265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:29.488 [2024-11-27 00:50:05.883276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.212 ms 00:27:29.488 [2024-11-27 00:50:05.883284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.888076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.888129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:29.488 [2024-11-27 00:50:05.888139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.764 ms 00:27:29.488 [2024-11-27 00:50:05.888160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.891583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.891640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:29.488 [2024-11-27 00:50:05.891651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.405 ms 00:27:29.488 [2024-11-27 00:50:05.891659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.488 [2024-11-27 00:50:05.894621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.488 [2024-11-27 00:50:05.894672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 
00:27:29.488 [2024-11-27 00:50:05.894683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.919 ms
00:27:29.488 [2024-11-27 00:50:05.894691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.488 [2024-11-27 00:50:05.896731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:29.488 [2024-11-27 00:50:05.896777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:29.488 [2024-11-27 00:50:05.896787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms
00:27:29.488 [2024-11-27 00:50:05.896795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.488 [2024-11-27 00:50:05.899184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:29.488 [2024-11-27 00:50:05.899236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:29.488 [2024-11-27 00:50:05.899245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms
00:27:29.488 [2024-11-27 00:50:05.899252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.488 [2024-11-27 00:50:05.899292] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:29.488 [2024-11-27 00:50:05.899308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:29.488 [2024-11-27 00:50:05.899319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:27:29.488 [2024-11-27 00:50:05.899329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:27:29.488 [2024-11-27 00:50:05.899427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.899991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:29.489 [2024-11-27 00:50:05.900275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:29.489 [2024-11-27 00:50:05.900301] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9010db2-f580-4033-9b44-da706c853bb8
00:27:29.489 [2024-11-27 00:50:05.900310] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:27:29.489 [2024-11-27 00:50:05.900319] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158144
00:27:29.489 [2024-11-27 00:50:05.900327] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156160
00:27:29.490 [2024-11-27 00:50:05.900336] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127
00:27:29.490 [2024-11-27 00:50:05.900354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:29.490 [2024-11-27 00:50:05.900368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:29.490 [2024-11-27 00:50:05.900376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:29.490 [2024-11-27 00:50:05.900383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:29.490 [2024-11-27 00:50:05.900397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:29.490 [2024-11-27 00:50:05.900405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:29.490 [2024-11-27 00:50:05.900415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:29.490 [2024-11-27 00:50:05.900424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms
00:27:29.490 [2024-11-27 00:50:05.900433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.902819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:29.490 [2024-11-27 00:50:05.902876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:29.490 [2024-11-27 00:50:05.902887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms
00:27:29.490 [2024-11-27 00:50:05.902895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.903021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:29.490 [2024-11-27 00:50:05.903039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:29.490 [2024-11-27 00:50:05.903049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms
00:27:29.490 [2024-11-27 00:50:05.903056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.910473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.910527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:29.490 [2024-11-27 00:50:05.910537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.910545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.910603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.910618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:29.490 [2024-11-27 00:50:05.910626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.910634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.910709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.910721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:29.490 [2024-11-27 00:50:05.910729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.910736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.910752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.910760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:29.490 [2024-11-27 00:50:05.910771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.910779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.924304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.924364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:29.490 [2024-11-27 00:50:05.924376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.924384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.934527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.934584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:29.490 [2024-11-27 00:50:05.934606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.934615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.934665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.934675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:29.490 [2024-11-27 00:50:05.934684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.934692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.934729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.934738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:29.490 [2024-11-27 00:50:05.934746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.934754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.934829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.934848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:29.490 [2024-11-27 00:50:05.934873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.934881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.934911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.934920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:29.490 [2024-11-27 00:50:05.934932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.934943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.934998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.935013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:29.490 [2024-11-27 00:50:05.935026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.935034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.935088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:29.490 [2024-11-27 00:50:05.935099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:29.490 [2024-11-27 00:50:05.935111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:29.490 [2024-11-27 00:50:05.935119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:29.490 [2024-11-27 00:50:05.935269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.889 ms, result 0
00:27:29.490
00:27:29.490
00:27:29.490 00:50:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:32.039 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:32.039 00:50:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:32.039 [2024-11-27 00:50:08.296619] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization...
00:27:32.040 [2024-11-27 00:50:08.296718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93054 ]
00:27:32.040 [2024-11-27 00:50:08.449109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:32.040 [2024-11-27 00:50:08.472709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:32.040 [2024-11-27 00:50:08.586201] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:32.040 [2024-11-27 00:50:08.586295] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:32.040 [2024-11-27 00:50:08.747955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.748020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:27:32.040 [2024-11-27 00:50:08.748036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:27:32.040 [2024-11-27 00:50:08.748045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.748097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.748108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:32.040 [2024-11-27 00:50:08.748117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:27:32.040 [2024-11-27 00:50:08.748125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.748153] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:32.040 [2024-11-27 00:50:08.748492] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:32.040 [2024-11-27 00:50:08.748554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.748563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:32.040 [2024-11-27 00:50:08.748575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms
00:27:32.040 [2024-11-27 00:50:08.748583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.750416] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:32.040 [2024-11-27 00:50:08.754377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.754430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:27:32.040 [2024-11-27 00:50:08.754449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms
00:27:32.040 [2024-11-27 00:50:08.754461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.754533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.754543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:27:32.040 [2024-11-27 00:50:08.754552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:27:32.040 [2024-11-27 00:50:08.754560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.762631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.762680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:32.040 [2024-11-27 00:50:08.762699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.022 ms
00:27:32.040 [2024-11-27 00:50:08.762708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.762815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.762825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:32.040 [2024-11-27 00:50:08.762839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms
00:27:32.040 [2024-11-27 00:50:08.762847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.762925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.762936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:27:32.040 [2024-11-27 00:50:08.762944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:27:32.040 [2024-11-27 00:50:08.762954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.762988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:32.040 [2024-11-27 00:50:08.765045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.765080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:32.040 [2024-11-27 00:50:08.765091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms
00:27:32.040 [2024-11-27 00:50:08.765099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.765133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.765141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:27:32.040 [2024-11-27 00:50:08.765156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:27:32.040 [2024-11-27 00:50:08.765166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.765189] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:32.040 [2024-11-27 00:50:08.765209] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:27:32.040 [2024-11-27 00:50:08.765246] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:27:32.040 [2024-11-27 00:50:08.765264] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:27:32.040 [2024-11-27 00:50:08.765370] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:27:32.040 [2024-11-27 00:50:08.765380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:32.040 [2024-11-27 00:50:08.765395] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:27:32.040 [2024-11-27 00:50:08.765410] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:27:32.040 [2024-11-27 00:50:08.765420] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:27:32.040 [2024-11-27 00:50:08.765429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:27:32.040 [2024-11-27 00:50:08.765436] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:27:32.040 [2024-11-27 00:50:08.765445] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:27:32.040 [2024-11-27 00:50:08.765456] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:27:32.040 [2024-11-27 00:50:08.765468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.765483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:27:32.040 [2024-11-27 00:50:08.765495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms
00:27:32.040 [2024-11-27 00:50:08.765506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.765606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.040 [2024-11-27 00:50:08.765616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:27:32.040 [2024-11-27 00:50:08.765624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:27:32.040 [2024-11-27 00:50:08.765631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.040 [2024-11-27 00:50:08.765746] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:27:32.040 [2024-11-27 00:50:08.765761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:27:32.040 [2024-11-27 00:50:08.765770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:32.040 [2024-11-27 00:50:08.765790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:32.040 [2024-11-27 00:50:08.765798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:27:32.040 [2024-11-27 00:50:08.765806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:27:32.040 [2024-11-27 00:50:08.765814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:27:32.040 [2024-11-27 00:50:08.765821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:27:32.040 [2024-11-27 00:50:08.765829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:27:32.040 [2024-11-27 00:50:08.765838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:32.041 [2024-11-27 00:50:08.765847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:27:32.041 [2024-11-27 00:50:08.765880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:27:32.041 [2024-11-27 00:50:08.765888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:32.041 [2024-11-27 00:50:08.765899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:27:32.041 [2024-11-27 00:50:08.765907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:27:32.041 [2024-11-27 00:50:08.765936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:32.041 [2024-11-27 00:50:08.765944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:27:32.041 [2024-11-27 00:50:08.765951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:27:32.041 [2024-11-27 00:50:08.765958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:32.041 [2024-11-27 00:50:08.765965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:27:32.041 [2024-11-27 00:50:08.765972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:27:32.041 [2024-11-27 00:50:08.765979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:32.041 [2024-11-27 00:50:08.765986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:27:32.041 [2024-11-27 00:50:08.765993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:27:32.041 [2024-11-27 00:50:08.766000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:32.041 [2024-11-27 00:50:08.766013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:27:32.041 [2024-11-27 00:50:08.766020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:27:32.041 [2024-11-27 00:50:08.766027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:32.041 [2024-11-27 00:50:08.766033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:27:32.041 [2024-11-27 00:50:08.766040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:27:32.041 [2024-11-27 00:50:08.766047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:27:32.041 [2024-11-27 00:50:08.766054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:27:32.041 [2024-11-27 00:50:08.766060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:27:32.041 [2024-11-27 00:50:08.766067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:32.041 [2024-11-27 00:50:08.766073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:27:32.041 [2024-11-27 00:50:08.766080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:27:32.041 [2024-11-27 00:50:08.766087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:32.041 [2024-11-27 00:50:08.766094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:27:32.041 [2024-11-27 00:50:08.766101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:27:32.041 [2024-11-27 00:50:08.766108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:32.041 [2024-11-27 00:50:08.766114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:27:32.041 [2024-11-27 00:50:08.766126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:27:32.041 [2024-11-27 00:50:08.766132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:32.041 [2024-11-27 00:50:08.766138] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:27:32.041 [2024-11-27 00:50:08.766149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:27:32.041 [2024-11-27 00:50:08.766160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:32.041 [2024-11-27 00:50:08.766168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:32.041 [2024-11-27 00:50:08.766181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:27:32.041 [2024-11-27 00:50:08.766192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:27:32.041 [2024-11-27 00:50:08.766203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:27:32.041 [2024-11-27 00:50:08.766213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:27:32.041 [2024-11-27 00:50:08.766224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:27:32.041 [2024-11-27 00:50:08.766237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:27:32.041 [2024-11-27 00:50:08.766251] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:27:32.041 [2024-11-27 00:50:08.766265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:32.041 [2024-11-27 00:50:08.766278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:27:32.041 [2024-11-27 00:50:08.766290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:27:32.041 [2024-11-27 00:50:08.766307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:27:32.041 [2024-11-27 00:50:08.766317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:27:32.041 [2024-11-27 00:50:08.766330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:27:32.041 [2024-11-27 00:50:08.766342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:27:32.041 [2024-11-27 00:50:08.766354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:27:32.041 [2024-11-27 00:50:08.766366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:27:32.041 [2024-11-27 00:50:08.766378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:27:32.041 [2024-11-27 00:50:08.766392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:27:32.041 [2024-11-27 00:50:08.766403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:27:32.041 [2024-11-27 00:50:08.766411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:27:32.041 [2024-11-27 00:50:08.766418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:27:32.041 [2024-11-27 00:50:08.766426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:27:32.041 [2024-11-27 00:50:08.766434] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:27:32.041 [2024-11-27 00:50:08.766443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:32.041 [2024-11-27 00:50:08.766457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:32.041 [2024-11-27 00:50:08.766465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:32.041 [2024-11-27 00:50:08.766475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:32.041 [2024-11-27 00:50:08.766483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:32.041 [2024-11-27 00:50:08.766492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.041 [2024-11-27 00:50:08.766501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:27:32.041 [2024-11-27 00:50:08.766511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms
00:27:32.041 [2024-11-27 00:50:08.766525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.041 [2024-11-27 00:50:08.780466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.041 [2024-11-27 00:50:08.780519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:32.041 [2024-11-27 00:50:08.780531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.886 ms
00:27:32.041 [2024-11-27 00:50:08.780539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.041 [2024-11-27 00:50:08.780637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.041 [2024-11-27 00:50:08.780651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:27:32.041 [2024-11-27 00:50:08.780660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:27:32.041 [2024-11-27 00:50:08.780668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.041 [2024-11-27 00:50:08.800160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.041 [2024-11-27 00:50:08.800221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:32.041 [2024-11-27 00:50:08.800235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.431 ms
00:27:32.041 [2024-11-27 00:50:08.800244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.041 [2024-11-27 00:50:08.800292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.041 [2024-11-27 00:50:08.800303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:32.041 [2024-11-27 00:50:08.800318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:27:32.041 [2024-11-27 00:50:08.800326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.041 [2024-11-27 00:50:08.800958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.041 [2024-11-27 00:50:08.800999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:32.041 [2024-11-27 00:50:08.801009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms
00:27:32.042 [2024-11-27 00:50:08.801018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.042 [2024-11-27 00:50:08.801182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.042 [2024-11-27 00:50:08.801193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:32.042 [2024-11-27 00:50:08.801202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms
00:27:32.042 [2024-11-27 00:50:08.801210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.042 [2024-11-27 00:50:08.809226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.042 [2024-11-27 00:50:08.809276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:32.042 [2024-11-27 00:50:08.809287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.995 ms
00:27:32.042 [2024-11-27 00:50:08.809296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.042 [2024-11-27 00:50:08.813279] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:27:32.042 [2024-11-27 00:50:08.813334] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:32.042 [2024-11-27 00:50:08.813352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.042 [2024-11-27 00:50:08.813361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:27:32.042 [2024-11-27 00:50:08.813370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.951 ms
00:27:32.042 [2024-11-27 00:50:08.813378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.304 [2024-11-27 00:50:08.829252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.304 [2024-11-27 00:50:08.829307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:27:32.304 [2024-11-27 00:50:08.829320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.818 ms
00:27:32.304 [2024-11-27 00:50:08.829328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.832585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.832638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:27:32.305 [2024-11-27 00:50:08.832649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms
00:27:32.305 [2024-11-27 00:50:08.832656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.835419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.835471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:27:32.305 [2024-11-27 00:50:08.835493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms
00:27:32.305 [2024-11-27 00:50:08.835500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.835979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.836008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:27:32.305 [2024-11-27 00:50:08.836019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms
00:27:32.305 [2024-11-27 00:50:08.836026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.862717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.862778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:27:32.305 [2024-11-27 00:50:08.862799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.652 ms
00:27:32.305 [2024-11-27 00:50:08.862808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.871024] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:27:32.305 [2024-11-27 00:50:08.874145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.874196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:27:32.305 [2024-11-27 00:50:08.874208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.270 ms
00:27:32.305 [2024-11-27 00:50:08.874222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.874304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.874316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:27:32.305 [2024-11-27 00:50:08.874326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:27:32.305 [2024-11-27 00:50:08.874334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.875153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.875208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:27:32.305 [2024-11-27 00:50:08.875219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms
00:27:32.305 [2024-11-27 00:50:08.875227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.875256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.875269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:27:32.305 [2024-11-27 00:50:08.875277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:27:32.305 [2024-11-27 00:50:08.875285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.875327] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:32.305 [2024-11-27 00:50:08.875337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.875349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:27:32.305 [2024-11-27 00:50:08.875360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:27:32.305 [2024-11-27 00:50:08.875368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.880714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.880766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:32.305 [2024-11-27 00:50:08.880786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.327 ms
00:27:32.305 [2024-11-27 00:50:08.880795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.880892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:32.305 [2024-11-27 00:50:08.880903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:32.305 [2024-11-27 00:50:08.880913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:27:32.305 [2024-11-27 00:50:08.880924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:32.305 [2024-11-27 00:50:08.882113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.635 ms, result 0
00:27:33.695  [2024-11-27T00:50:11.427Z] Copying: 21/1024 [MB] (21 MBps)
[2024-11-27T00:50:12.370Z] Copying: 42/1024 [MB] (20 MBps)
[2024-11-27T00:50:13.315Z] Copying: 55/1024 [MB] (13 MBps)
[2024-11-27T00:50:14.262Z] Copying: 66/1024 [MB] (10 MBps)
[2024-11-27T00:50:15.208Z] Copying: 80/1024 [MB] (13 MBps)
[2024-11-27T00:50:16.153Z] Copying: 93/1024 [MB] (13 MBps)
[2024-11-27T00:50:17.097Z] Copying: 107/1024 [MB] (14 MBps)
[2024-11-27T00:50:18.485Z] Copying: 126/1024 [MB] (19 MBps)
[2024-11-27T00:50:19.431Z] Copying: 140/1024 [MB] (14 MBps)
[2024-11-27T00:50:20.378Z] Copying: 159/1024 [MB] (18 MBps)
[2024-11-27T00:50:21.325Z] Copying: 181/1024 [MB] (22 MBps)
[2024-11-27T00:50:22.272Z] Copying: 201/1024 [MB] (20 MBps)
[2024-11-27T00:50:23.215Z] Copying: 219/1024 [MB] (18 MBps)
[2024-11-27T00:50:24.161Z] Copying: 241/1024 [MB] (21 MBps)
[2024-11-27T00:50:25.104Z] Copying: 253/1024 [MB] (12 MBps)
[2024-11-27T00:50:26.493Z] Copying: 271/1024 [MB] (17 MBps)
[2024-11-27T00:50:27.129Z] Copying: 292/1024 [MB] (21 MBps)
[2024-11-27T00:50:28.073Z] Copying: 309/1024 [MB] (17 MBps)
[2024-11-27T00:50:29.460Z] Copying: 321/1024 [MB] (11 MBps)
[2024-11-27T00:50:30.405Z] Copying: 336/1024 [MB] (14 MBps)
[2024-11-27T00:50:31.351Z] Copying: 356/1024 [MB] (20 MBps)
[2024-11-27T00:50:32.301Z] Copying: 376/1024 [MB] (19 MBps)
[2024-11-27T00:50:33.245Z] Copying: 394/1024 [MB] (18 MBps)
[2024-11-27T00:50:34.192Z] Copying: 416/1024 [MB] (22 MBps)
[2024-11-27T00:50:35.139Z] Copying: 437/1024 [MB] (20 MBps)
[2024-11-27T00:50:36.086Z] Copying: 459/1024 [MB] (21 MBps)
[2024-11-27T00:50:37.475Z] Copying: 476/1024 [MB] (17 MBps)
[2024-11-27T00:50:38.421Z] Copying: 489/1024 [MB] (12 MBps)
[2024-11-27T00:50:39.364Z] Copying: 499/1024 [MB] (10 MBps)
[2024-11-27T00:50:40.308Z] Copying: 510/1024 [MB] (10 MBps)
[2024-11-27T00:50:41.253Z] Copying: 521/1024 [MB] (10 MBps)
[2024-11-27T00:50:42.197Z] Copying: 535/1024 [MB] (14 MBps)
[2024-11-27T00:50:43.140Z] Copying: 567/1024 [MB] (31 MBps)
[2024-11-27T00:50:44.084Z] Copying: 582/1024 [MB] (15 MBps)
[2024-11-27T00:50:45.472Z] Copying: 594/1024 [MB] (12 MBps)
[2024-11-27T00:50:46.417Z] Copying: 605/1024 [MB] (10 MBps)
[2024-11-27T00:50:47.363Z] Copying: 616/1024 [MB] (10 MBps)
[2024-11-27T00:50:48.317Z] Copying: 627/1024 [MB] (10 MBps)
[2024-11-27T00:50:49.264Z] Copying: 640/1024 [MB] (13 MBps)
[2024-11-27T00:50:50.208Z] Copying: 652/1024 [MB] (11 MBps)
[2024-11-27T00:50:51.150Z] Copying: 663/1024 [MB] (10 MBps)
[2024-11-27T00:50:52.094Z] Copying: 676/1024 [MB] (13 MBps)
[2024-11-27T00:50:53.480Z] Copying: 692/1024 [MB] (15 MBps)
[2024-11-27T00:50:54.422Z] Copying: 705/1024 [MB] (12 MBps)
[2024-11-27T00:50:55.368Z] Copying: 717/1024 [MB] (12 MBps)
[2024-11-27T00:50:56.319Z] Copying: 733/1024 [MB] (16 MBps)
[2024-11-27T00:50:57.305Z] Copying: 756/1024 [MB] (22 MBps)
[2024-11-27T00:50:58.237Z] Copying: 786/1024 [MB] (30 MBps)
[2024-11-27T00:50:59.170Z] Copying: 811/1024 [MB] (25 MBps)
[2024-11-27T00:51:00.101Z] Copying: 836/1024 [MB] (24 MBps)
[2024-11-27T00:51:01.474Z] Copying: 859/1024 [MB] (22 MBps)
[2024-11-27T00:51:02.407Z] Copying: 891/1024 [MB] (31 MBps)
[2024-11-27T00:51:03.340Z] Copying: 918/1024 [MB] (27 MBps)
[2024-11-27T00:51:04.274Z] Copying: 946/1024 [MB] (27 MBps)
[2024-11-27T00:51:05.208Z] Copying: 985/1024 [MB] (38 MBps)
[2024-11-27T00:51:05.775Z] Copying: 1006/1024 [MB] (21 MBps)
[2024-11-27T00:51:06.344Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-27 00:51:06.279788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.279851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:29.557 [2024-11-27 00:51:06.279886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:29.557 [2024-11-27 00:51:06.279894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.279918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:29.557 [2024-11-27 00:51:06.280373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.280397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:29.557 [2024-11-27 00:51:06.280407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:28:29.557 [2024-11-27 00:51:06.280414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.280633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.280643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:29.557 [2024-11-27 00:51:06.280652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:28:29.557 [2024-11-27 00:51:06.280664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.284112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.284134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:29.557 [2024-11-27 00:51:06.284145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.435 ms 00:28:29.557 [2024-11-27 00:51:06.284152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.291800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.291832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:29.557 [2024-11-27 00:51:06.291843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:28:29.557 [2024-11-27 00:51:06.291863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.294221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.294258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:29.557 [2024-11-27 00:51:06.294268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms 00:28:29.557 [2024-11-27 00:51:06.294274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.298001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.298038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:29.557 [2024-11-27 00:51:06.298047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:28:29.557 [2024-11-27 00:51:06.298055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.301724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.301755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:28:29.557 [2024-11-27 00:51:06.301774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:28:29.557 [2024-11-27 00:51:06.301785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.304903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.304935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:29.557 [2024-11-27 00:51:06.304944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.091 ms 00:28:29.557 [2024-11-27 00:51:06.304951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.307229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.307263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:29.557 [2024-11-27 00:51:06.307271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:28:29.557 [2024-11-27 00:51:06.307277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.309932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.309965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:29.557 [2024-11-27 00:51:06.309974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:28:29.557 [2024-11-27 00:51:06.309980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.311625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.557 [2024-11-27 00:51:06.311659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:29.557 [2024-11-27 00:51:06.311668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:28:29.557 [2024-11-27 00:51:06.311675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.557 [2024-11-27 00:51:06.311691] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:29.557 [2024-11-27 00:51:06.311705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:29.557 [2024-11-27 00:51:06.311715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:29.557 [2024-11-27 00:51:06.311723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:29.557 [2024-11-27 00:51:06.311730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.311738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.311745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.311753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.311760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.311768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.311775] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
[Bands 11 through 84 elided: every entry reads 0 / 261120 wr_cnt: 0 state: free]
00:28:29.558 [2024-11-27 00:51:06.312341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:29.558 [2024-11-27 00:51:06.312400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:29.559 [2024-11-27 00:51:06.312469] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:29.559 [2024-11-27 00:51:06.312477] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a9010db2-f580-4033-9b44-da706c853bb8 00:28:29.559 [2024-11-27 00:51:06.312485] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:29.559 [2024-11-27 00:51:06.312492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:29.559 [2024-11-27 00:51:06.312500] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:29.559 [2024-11-27 00:51:06.312507] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:29.559 [2024-11-27 00:51:06.312514] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:29.559 [2024-11-27 00:51:06.312522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:29.559 [2024-11-27 00:51:06.312538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:29.559 [2024-11-27 00:51:06.312550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:29.559 [2024-11-27 00:51:06.312556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 
0 00:28:29.559 [2024-11-27 00:51:06.312563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.559 [2024-11-27 00:51:06.312571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:29.559 [2024-11-27 00:51:06.312579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:28:29.559 [2024-11-27 00:51:06.312586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.314211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.559 [2024-11-27 00:51:06.314237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:29.559 [2024-11-27 00:51:06.314245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:28:29.559 [2024-11-27 00:51:06.314252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.314355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.559 [2024-11-27 00:51:06.314364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:29.559 [2024-11-27 00:51:06.314372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:29.559 [2024-11-27 00:51:06.314379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.319475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.319510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:29.559 [2024-11-27 00:51:06.319523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.319531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.319578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.319585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:29.559 [2024-11-27 00:51:06.319598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.319606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.319640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.319650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:29.559 [2024-11-27 00:51:06.319658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.319667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.319681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.319689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:29.559 [2024-11-27 00:51:06.319696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.319703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.328977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.329021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:29.559 [2024-11-27 00:51:06.329033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.329047] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.336293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:29.559 [2024-11-27 00:51:06.336304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.336311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.336361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:29.559 [2024-11-27 00:51:06.336370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.336377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.336413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:29.559 [2024-11-27 00:51:06.336421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.336428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.336500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:29.559 [2024-11-27 00:51:06.336508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.336515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.336556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:29.559 [2024-11-27 00:51:06.336567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.336574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.336614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:29.559 [2024-11-27 00:51:06.336623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.336629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.559 [2024-11-27 00:51:06.336683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:29.559 [2024-11-27 00:51:06.336690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.559 [2024-11-27 00:51:06.336698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.559 [2024-11-27 00:51:06.336806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.994 ms, result 0 00:28:29.818 00:28:29.818 00:28:29.818 00:51:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:32.361 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:32.361 Process with pid 91267 is not found 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91267 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91267 ']' 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91267 00:28:32.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91267) - No such process 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91267 is not found' 00:28:32.361 00:51:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:32.622 Remove shared memory files 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:32.622 00:28:32.622 real 3m46.837s 00:28:32.622 user 4m11.140s 00:28:32.622 sys 0m27.958s 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.622 ************************************ 00:28:32.622 END TEST ftl_dirty_shutdown 00:28:32.622 ************************************ 00:28:32.622 00:51:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.622 00:51:09 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:32.622 00:51:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:32.622 00:51:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.622 00:51:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:32.622 ************************************ 00:28:32.622 START TEST ftl_upgrade_shutdown 00:28:32.622 ************************************ 00:28:32.622 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:32.622 * Looking for test storage... 
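The md5sum -c verification above is the core assertion of the dirty-shutdown test: a checksum of each test file is recorded while the FTL bdev is live, the target is shut down uncleanly, and the same checksum must still verify after recovery. A minimal sketch of that pattern (file paths hypothetical, not the suite's actual helpers):

    md5sum testfile > testfile.md5     # record the checksum while the device is live
    # ... force the unclean shutdown, then bring the FTL bdev back up ...
    md5sum -c testfile.md5             # passes only if the recovered data is byte-identical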
00:28:32.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:32.622 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:32.622 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:32.622 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.884 --rc genhtml_branch_coverage=1 00:28:32.884 --rc genhtml_function_coverage=1 00:28:32.884 --rc genhtml_legend=1 00:28:32.884 --rc geninfo_all_blocks=1 00:28:32.884 --rc geninfo_unexecuted_blocks=1 00:28:32.884 00:28:32.884 ' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.884 --rc genhtml_branch_coverage=1 00:28:32.884 --rc genhtml_function_coverage=1 00:28:32.884 --rc genhtml_legend=1 00:28:32.884 --rc geninfo_all_blocks=1 00:28:32.884 --rc geninfo_unexecuted_blocks=1 00:28:32.884 00:28:32.884 ' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.884 --rc genhtml_branch_coverage=1 00:28:32.884 --rc genhtml_function_coverage=1 00:28:32.884 --rc genhtml_legend=1 00:28:32.884 --rc geninfo_all_blocks=1 00:28:32.884 --rc geninfo_unexecuted_blocks=1 00:28:32.884 00:28:32.884 ' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:32.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.884 --rc genhtml_branch_coverage=1 00:28:32.884 --rc genhtml_function_coverage=1 00:28:32.884 --rc genhtml_legend=1 00:28:32.884 --rc geninfo_all_blocks=1 00:28:32.884 --rc geninfo_unexecuted_blocks=1 00:28:32.884 00:28:32.884 ' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:32.884 00:51:09 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:32.884 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93775 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93775 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93775 ']' 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.885 00:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:32.885 [2024-11-27 00:51:09.543755] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
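For reference, the tcp_target_setup step above boils down to launching spdk_tgt pinned to a single core and polling its RPC socket until it answers before any bdev RPCs are issued. A standalone approximation (a sketch assuming the default /var/tmp/spdk.sock socket and the repo paths shown in this log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask '[0]' &
    tgt_pid=$!
    # poll until the RPC server responds; waitforlisten performs a similar retry loop
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done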
00:28:32.885 [2024-11-27 00:51:09.543920] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93775 ] 00:28:33.146 [2024-11-27 00:51:09.708190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.146 [2024-11-27 00:51:09.741117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.714 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:33.715 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:28:33.975 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:34.236 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:34.236 { 00:28:34.236 "name": "basen1", 00:28:34.236 "aliases": [ 00:28:34.236 "89729366-f22b-4d96-925f-40dce4b7f544" 00:28:34.236 ], 00:28:34.236 "product_name": "NVMe disk", 00:28:34.236 "block_size": 4096, 00:28:34.236 "num_blocks": 1310720, 00:28:34.236 "uuid": "89729366-f22b-4d96-925f-40dce4b7f544", 00:28:34.236 "numa_id": -1, 00:28:34.236 "assigned_rate_limits": { 00:28:34.236 "rw_ios_per_sec": 0, 00:28:34.236 "rw_mbytes_per_sec": 0, 00:28:34.236 "r_mbytes_per_sec": 0, 00:28:34.236 "w_mbytes_per_sec": 0 00:28:34.236 }, 00:28:34.236 "claimed": true, 00:28:34.236 "claim_type": "read_many_write_one", 00:28:34.236 "zoned": false, 00:28:34.236 "supported_io_types": { 00:28:34.236 "read": true, 00:28:34.236 "write": true, 00:28:34.236 "unmap": true, 00:28:34.236 "flush": true, 00:28:34.236 "reset": true, 00:28:34.236 "nvme_admin": true, 00:28:34.236 "nvme_io": true, 00:28:34.236 "nvme_io_md": false, 00:28:34.236 "write_zeroes": true, 00:28:34.236 "zcopy": false, 00:28:34.236 "get_zone_info": false, 00:28:34.236 "zone_management": false, 00:28:34.236 "zone_append": false, 00:28:34.236 "compare": true, 00:28:34.236 "compare_and_write": false, 00:28:34.236 "abort": true, 00:28:34.236 "seek_hole": false, 00:28:34.236 "seek_data": false, 00:28:34.236 "copy": true, 00:28:34.236 "nvme_iov_md": false 00:28:34.236 }, 00:28:34.236 "driver_specific": { 00:28:34.236 "nvme": [ 00:28:34.236 { 00:28:34.236 "pci_address": "0000:00:11.0", 00:28:34.236 "trid": { 00:28:34.236 "trtype": "PCIe", 00:28:34.236 "traddr": "0000:00:11.0" 00:28:34.236 }, 00:28:34.236 "ctrlr_data": { 00:28:34.236 "cntlid": 0, 00:28:34.236 "vendor_id": "0x1b36", 00:28:34.236 "model_number": "QEMU NVMe Ctrl", 00:28:34.236 "serial_number": "12341", 00:28:34.236 "firmware_revision": "8.0.0", 00:28:34.236 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:34.236 "oacs": { 00:28:34.236 "security": 0, 00:28:34.236 "format": 1, 00:28:34.236 "firmware": 0, 00:28:34.236 "ns_manage": 1 00:28:34.236 }, 00:28:34.236 "multi_ctrlr": false, 00:28:34.236 "ana_reporting": false 00:28:34.236 }, 00:28:34.236 "vs": { 00:28:34.236 "nvme_version": "1.4" 00:28:34.236 }, 00:28:34.236 "ns_data": { 00:28:34.236 "id": 1, 00:28:34.236 "can_share": false 00:28:34.236 } 00:28:34.236 } 00:28:34.236 ], 00:28:34.236 "mp_policy": "active_passive" 00:28:34.236 } 00:28:34.236 } 00:28:34.236 ]' 00:28:34.236 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:34.236 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:34.236 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:34.236 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:34.236 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:34.237 00:51:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:34.237 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:34.237 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:34.237 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:34.237 00:51:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:34.237 00:51:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:34.498 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=c8c61ace-5ca4-4057-8bce-d701a6a6128c 00:28:34.498 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:34.498 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c8c61ace-5ca4-4057-8bce-d701a6a6128c 00:28:34.760 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:35.022 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=bdb786d7-a5a4-4119-8283-ad6a256512d1 00:28:35.022 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u bdb786d7-a5a4-4119-8283-ad6a256512d1 00:28:35.283 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=b2638fb8-be12-4986-a09c-2d16ca63460e 00:28:35.283 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z b2638fb8-be12-4986-a09c-2d16ca63460e ]] 00:28:35.283 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 b2638fb8-be12-4986-a09c-2d16ca63460e 5120 00:28:35.283 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:35.283 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:35.283 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=b2638fb8-be12-4986-a09c-2d16ca63460e 00:28:35.284 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:35.284 00:51:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size b2638fb8-be12-4986-a09c-2d16ca63460e 00:28:35.284 00:51:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b2638fb8-be12-4986-a09c-2d16ca63460e 00:28:35.284 00:51:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:35.284 00:51:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:35.284 00:51:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:35.284 00:51:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b2638fb8-be12-4986-a09c-2d16ca63460e 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:35.545 { 00:28:35.545 "name": "b2638fb8-be12-4986-a09c-2d16ca63460e", 00:28:35.545 "aliases": [ 00:28:35.545 "lvs/basen1p0" 00:28:35.545 ], 00:28:35.545 "product_name": "Logical Volume", 00:28:35.545 "block_size": 4096, 00:28:35.545 "num_blocks": 5242880, 00:28:35.545 "uuid": "b2638fb8-be12-4986-a09c-2d16ca63460e", 00:28:35.545 "assigned_rate_limits": { 00:28:35.545 "rw_ios_per_sec": 0, 00:28:35.545 "rw_mbytes_per_sec": 0, 00:28:35.545 "r_mbytes_per_sec": 0, 00:28:35.545 "w_mbytes_per_sec": 0 00:28:35.545 }, 00:28:35.545 "claimed": false, 00:28:35.545 "zoned": false, 00:28:35.545 "supported_io_types": { 00:28:35.545 "read": true, 00:28:35.545 "write": true, 00:28:35.545 "unmap": true, 00:28:35.545 "flush": false, 00:28:35.545 "reset": true, 00:28:35.545 "nvme_admin": false, 00:28:35.545 "nvme_io": false, 00:28:35.545 "nvme_io_md": false, 00:28:35.545 "write_zeroes": 
true, 00:28:35.545 "zcopy": false, 00:28:35.545 "get_zone_info": false, 00:28:35.545 "zone_management": false, 00:28:35.545 "zone_append": false, 00:28:35.545 "compare": false, 00:28:35.545 "compare_and_write": false, 00:28:35.545 "abort": false, 00:28:35.545 "seek_hole": true, 00:28:35.545 "seek_data": true, 00:28:35.545 "copy": false, 00:28:35.545 "nvme_iov_md": false 00:28:35.545 }, 00:28:35.545 "driver_specific": { 00:28:35.545 "lvol": { 00:28:35.545 "lvol_store_uuid": "bdb786d7-a5a4-4119-8283-ad6a256512d1", 00:28:35.545 "base_bdev": "basen1", 00:28:35.545 "thin_provision": true, 00:28:35.545 "num_allocated_clusters": 0, 00:28:35.545 "snapshot": false, 00:28:35.545 "clone": false, 00:28:35.545 "esnap_clone": false 00:28:35.545 } 00:28:35.545 } 00:28:35.545 } 00:28:35.545 ]' 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:35.545 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:35.808 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:35.808 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:35.808 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:36.071 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:36.071 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:36.071 00:51:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d b2638fb8-be12-4986-a09c-2d16ca63460e -c cachen1p0 --l2p_dram_limit 2 00:28:36.071 [2024-11-27 00:51:12.843783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.843818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:36.071 [2024-11-27 00:51:12.843829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:36.071 [2024-11-27 00:51:12.843836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.843886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.843896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:36.071 [2024-11-27 00:51:12.843903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:36.071 [2024-11-27 00:51:12.843911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.843926] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:36.071 [2024-11-27 
00:51:12.844112] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:36.071 [2024-11-27 00:51:12.844125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.844133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:36.071 [2024-11-27 00:51:12.844139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:28:36.071 [2024-11-27 00:51:12.844147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.844193] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a1e7c5cb-6d2c-4bf8-94ed-10f405d05c08 00:28:36.071 [2024-11-27 00:51:12.845207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.845231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:36.071 [2024-11-27 00:51:12.845244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:36.071 [2024-11-27 00:51:12.845251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.849977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.849993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:36.071 [2024-11-27 00:51:12.850003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.691 ms 00:28:36.071 [2024-11-27 00:51:12.850009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.850043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.850050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:36.071 [2024-11-27 00:51:12.850058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:36.071 [2024-11-27 00:51:12.850064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.850104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.850111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:36.071 [2024-11-27 00:51:12.850118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:36.071 [2024-11-27 00:51:12.850124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.850141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:36.071 [2024-11-27 00:51:12.851394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.851413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:36.071 [2024-11-27 00:51:12.851420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.258 ms 00:28:36.071 [2024-11-27 00:51:12.851427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.851446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.851457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:36.071 [2024-11-27 00:51:12.851463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:36.071 [2024-11-27 00:51:12.851474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.851489] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:36.071 [2024-11-27 00:51:12.851599] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:36.071 [2024-11-27 00:51:12.851610] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:36.071 [2024-11-27 00:51:12.851619] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:36.071 [2024-11-27 00:51:12.851630] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:36.071 [2024-11-27 00:51:12.851643] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:36.071 [2024-11-27 00:51:12.851653] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:36.071 [2024-11-27 00:51:12.851661] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:36.071 [2024-11-27 00:51:12.851666] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:36.071 [2024-11-27 00:51:12.851673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:36.071 [2024-11-27 00:51:12.851680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.851688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:36.071 [2024-11-27 00:51:12.851694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:28:36.071 [2024-11-27 00:51:12.851701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.851763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.071 [2024-11-27 00:51:12.851772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:36.071 [2024-11-27 00:51:12.851779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:36.071 [2024-11-27 00:51:12.851787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.071 [2024-11-27 00:51:12.851869] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:36.071 [2024-11-27 00:51:12.851880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:36.071 [2024-11-27 00:51:12.851887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:36.071 [2024-11-27 00:51:12.851895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.071 [2024-11-27 00:51:12.851903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:36.071 [2024-11-27 00:51:12.851910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:36.071 [2024-11-27 00:51:12.851915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:36.071 [2024-11-27 00:51:12.851923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:36.071 [2024-11-27 00:51:12.851928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:36.071 [2024-11-27 00:51:12.851935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.071 [2024-11-27 00:51:12.851940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:36.071 [2024-11-27 00:51:12.851947] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:36.071 [2024-11-27 00:51:12.851952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.071 [2024-11-27 00:51:12.851960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:36.071 [2024-11-27 00:51:12.851965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:36.071 [2024-11-27 00:51:12.851972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.071 [2024-11-27 00:51:12.851977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:36.071 [2024-11-27 00:51:12.851983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:36.071 [2024-11-27 00:51:12.851990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.071 [2024-11-27 00:51:12.851997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:36.071 [2024-11-27 00:51:12.852002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:36.071 [2024-11-27 00:51:12.852009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:36.071 [2024-11-27 00:51:12.852014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:36.071 [2024-11-27 00:51:12.852020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:36.071 [2024-11-27 00:51:12.852026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:36.071 [2024-11-27 00:51:12.852032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:36.071 [2024-11-27 00:51:12.852037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:36.071 [2024-11-27 00:51:12.852044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:36.071 [2024-11-27 00:51:12.852049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:36.071 [2024-11-27 00:51:12.852058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:36.072 [2024-11-27 00:51:12.852063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:36.072 [2024-11-27 00:51:12.852070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:36.072 [2024-11-27 00:51:12.852075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:36.072 [2024-11-27 00:51:12.852080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.072 [2024-11-27 00:51:12.852086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:36.072 [2024-11-27 00:51:12.852092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:36.072 [2024-11-27 00:51:12.852097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.072 [2024-11-27 00:51:12.852103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:36.072 [2024-11-27 00:51:12.852109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:36.072 [2024-11-27 00:51:12.852115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.072 [2024-11-27 00:51:12.852120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:36.072 [2024-11-27 00:51:12.852127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:36.072 [2024-11-27 00:51:12.852132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.072 [2024-11-27 00:51:12.852138] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:36.072 [2024-11-27 00:51:12.852143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:36.072 [2024-11-27 00:51:12.852151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:36.072 [2024-11-27 00:51:12.852157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:36.072 [2024-11-27 00:51:12.852168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:36.072 [2024-11-27 00:51:12.852173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:36.072 [2024-11-27 00:51:12.852180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:36.072 [2024-11-27 00:51:12.852186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:36.072 [2024-11-27 00:51:12.852193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:36.072 [2024-11-27 00:51:12.852198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:36.072 [2024-11-27 00:51:12.852206] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:36.072 [2024-11-27 00:51:12.852214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:36.072 [2024-11-27 00:51:12.852227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:36.072 [2024-11-27 00:51:12.852246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:36.072 [2024-11-27 00:51:12.852252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:36.072 [2024-11-27 00:51:12.852259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:36.072 [2024-11-27 00:51:12.852264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:36.072 [2024-11-27 00:51:12.852308] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:36.072 [2024-11-27 00:51:12.852315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:36.072 [2024-11-27 00:51:12.852327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:36.072 [2024-11-27 00:51:12.852334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:36.072 [2024-11-27 00:51:12.852340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:36.072 [2024-11-27 00:51:12.852348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.072 [2024-11-27 00:51:12.852354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:36.072 [2024-11-27 00:51:12.852363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:28:36.072 [2024-11-27 00:51:12.852369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.072 [2024-11-27 00:51:12.852399] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
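A note on the superblock metadata dump just above: regions are listed with hex block offsets and sizes, while the earlier layout dump reports the same regions in MiB. A minimal conversion sketch, assuming the 4 KiB FTL metadata block size these figures imply (0xe80 blocks lines up with the 14.50 MiB l2p region):

    blocks=$((0xe80))                              # 3712 blocks
    echo "scale=2; $blocks * 4096 / 1048576" | bc  # 14.50, as in the l2p line

The scrub that follows wipes the NV cache data region chunk by chunk; with 5 chunks to cover, it ends up dominating the startup time reported further down.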
00:28:36.072 [2024-11-27 00:51:12.852406] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:40.281 [2024-11-27 00:51:16.472563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.472648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:40.281 [2024-11-27 00:51:16.472681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3620.138 ms 00:28:40.281 [2024-11-27 00:51:16.472710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.480946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.480980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:40.281 [2024-11-27 00:51:16.480994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.001 ms 00:28:40.281 [2024-11-27 00:51:16.481005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.481060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.481070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:40.281 [2024-11-27 00:51:16.481082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:40.281 [2024-11-27 00:51:16.481089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.489405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.489436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:40.281 [2024-11-27 00:51:16.489448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.284 ms 00:28:40.281 [2024-11-27 00:51:16.489458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.489485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.489494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:40.281 [2024-11-27 00:51:16.489503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:40.281 [2024-11-27 00:51:16.489510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.489866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.489890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:40.281 [2024-11-27 00:51:16.489902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:28:40.281 [2024-11-27 00:51:16.489910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.489967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.489980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:40.281 [2024-11-27 00:51:16.489990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:40.281 [2024-11-27 00:51:16.490001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.495242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.495273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:40.281 [2024-11-27 00:51:16.495285] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.221 ms 00:28:40.281 [2024-11-27 00:51:16.495293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.512538] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:40.281 [2024-11-27 00:51:16.513510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.513546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:40.281 [2024-11-27 00:51:16.513559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.165 ms 00:28:40.281 [2024-11-27 00:51:16.513572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.528147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.528184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:40.281 [2024-11-27 00:51:16.528195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.537 ms 00:28:40.281 [2024-11-27 00:51:16.528206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.528283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.528295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:40.281 [2024-11-27 00:51:16.528304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:40.281 [2024-11-27 00:51:16.528313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.531624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.531656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:40.281 [2024-11-27 00:51:16.531667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.293 ms 00:28:40.281 [2024-11-27 00:51:16.531676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.534945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.534974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:40.281 [2024-11-27 00:51:16.534983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.238 ms 00:28:40.281 [2024-11-27 00:51:16.534991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.535273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.535294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:40.281 [2024-11-27 00:51:16.535303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:28:40.281 [2024-11-27 00:51:16.535313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.567446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.567480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:40.281 [2024-11-27 00:51:16.567492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.103 ms 00:28:40.281 [2024-11-27 00:51:16.567505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.571697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:40.281 [2024-11-27 00:51:16.571730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:40.281 [2024-11-27 00:51:16.571739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.149 ms 00:28:40.281 [2024-11-27 00:51:16.571748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.281 [2024-11-27 00:51:16.575502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.281 [2024-11-27 00:51:16.575536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:40.281 [2024-11-27 00:51:16.575545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.722 ms 00:28:40.281 [2024-11-27 00:51:16.575553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.282 [2024-11-27 00:51:16.579543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.282 [2024-11-27 00:51:16.579574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:40.282 [2024-11-27 00:51:16.579583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.959 ms 00:28:40.282 [2024-11-27 00:51:16.579593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.282 [2024-11-27 00:51:16.579627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.282 [2024-11-27 00:51:16.579638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:40.282 [2024-11-27 00:51:16.579646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:40.282 [2024-11-27 00:51:16.579655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.282 [2024-11-27 00:51:16.579713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.282 [2024-11-27 00:51:16.579724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:40.282 [2024-11-27 00:51:16.579736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:40.282 [2024-11-27 00:51:16.579747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.282 [2024-11-27 00:51:16.582069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3737.351 ms, result 0 00:28:40.282 { 00:28:40.282 "name": "ftl", 00:28:40.282 "uuid": "a1e7c5cb-6d2c-4bf8-94ed-10f405d05c08" 00:28:40.282 } 00:28:40.282 00:51:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:40.282 [2024-11-27 00:51:16.795037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.282 00:51:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:40.282 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:40.541 [2024-11-27 00:51:17.195439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:40.541 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:40.799 [2024-11-27 00:51:17.399793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:40.799 00:51:17 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:41.058 Fill FTL, iteration 1 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93896 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93896 /var/tmp/spdk.tgt.sock 00:28:41.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93896 ']' 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.058 00:51:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:41.058 [2024-11-27 00:51:17.824130] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
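Stepping back from the raw trace: the four rpc.py calls above export the freshly created ftl bdev over NVMe/TCP, which is what lets spdk_dd reach it from a second process. Condensed from the log (same rpc.py path throughout):

    rpc.py nvmf_create_transport --trtype TCP
    rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1

The initiator-side spdk_tgt being launched here (pid 93896) then attaches to that listener and surfaces the namespace as ftln1.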
00:28:41.058 [2024-11-27 00:51:17.824248] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93896 ] 00:28:41.317 [2024-11-27 00:51:17.981147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.317 [2024-11-27 00:51:17.999014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.884 00:51:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.884 00:51:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:41.884 00:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:42.142 ftln1 00:28:42.142 00:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:42.142 00:51:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93896 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93896 ']' 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93896 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93896 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:42.401 killing process with pid 93896 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93896' 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93896 00:28:42.401 00:51:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93896 00:28:42.658 00:51:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:42.658 00:51:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:42.916 [2024-11-27 00:51:19.452128] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
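The fill command spdk_dd just ran keeps dd(1)-style semantics: --ob names the output bdev (the ftln1 attachment created a moment earlier), --seek counts output blocks of --bs, and --qd sets the queue depth, so bs=1048576 with count=1024 moves exactly the declared size=1073741824 bytes. Restated from the log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

Iteration 2 repeats the same invocation with --seek=1024, placing its gigabyte directly after this one.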
00:28:42.916 [2024-11-27 00:51:19.452245] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93932 ] 00:28:42.916 [2024-11-27 00:51:19.610260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.916 [2024-11-27 00:51:19.628153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.292  [2024-11-27T00:51:22.024Z] Copying: 197/1024 [MB] (197 MBps) [2024-11-27T00:51:22.969Z] Copying: 389/1024 [MB] (192 MBps) [2024-11-27T00:51:23.910Z] Copying: 568/1024 [MB] (179 MBps) [2024-11-27T00:51:24.846Z] Copying: 777/1024 [MB] (209 MBps) [2024-11-27T00:51:24.847Z] Copying: 1019/1024 [MB] (242 MBps) [2024-11-27T00:51:25.110Z] Copying: 1024/1024 [MB] (average 203 MBps) 00:28:48.323 00:28:48.323 Calculate MD5 checksum, iteration 1 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:48.323 00:51:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:48.323 [2024-11-27 00:51:25.086985] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
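The verify half mirrors the fill: the same extent is read back out of ftln1 into a scratch file (--ib/--of with --skip where the fill used --seek) and hashed, as the md5sum | cut pair on the next lines shows:

    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '

The read-back averages 613 MBps against the fill's 203 MBps; the gap is likely a mix of /dev/urandom generation cost on the input side and FTL write-path bookkeeping, though the log itself does not attribute it.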
00:28:48.323 [2024-11-27 00:51:25.087098] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94002 ] 00:28:48.628 [2024-11-27 00:51:25.240281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.628 [2024-11-27 00:51:25.266520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.030  [2024-11-27T00:51:27.386Z] Copying: 627/1024 [MB] (627 MBps) [2024-11-27T00:51:27.386Z] Copying: 1024/1024 [MB] (average 613 MBps) 00:28:50.599 00:28:50.599 00:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:50.599 00:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=68c77f7ba89855816859ae27ab57a0c5 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:53.142 Fill FTL, iteration 2 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:53.142 00:51:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:53.142 [2024-11-27 00:51:29.556798] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
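Each pass records its digest into the sums array keyed by iteration, in the spirit of (FILE standing in for the scratch path above):

    sums[i]=$(md5sum "$FILE" | cut -f1 -d' ')

so sums[0] is 68c77f7ba89855816859ae27ab57a0c5 after this first pass. These digests are the reference values the test can re-check once the FTL has been shut down with upgrade preparation enabled and brought back up.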
00:28:53.142 [2024-11-27 00:51:29.556923] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94056 ] 00:28:53.142 [2024-11-27 00:51:29.710622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.142 [2024-11-27 00:51:29.734484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.529  [2024-11-27T00:51:32.256Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-27T00:51:33.189Z] Copying: 360/1024 [MB] (174 MBps) [2024-11-27T00:51:34.124Z] Copying: 599/1024 [MB] (239 MBps) [2024-11-27T00:51:34.692Z] Copying: 843/1024 [MB] (244 MBps) [2024-11-27T00:51:34.952Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:28:58.165 00:28:58.165 Calculate MD5 checksum, iteration 2 00:28:58.165 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:58.166 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:58.166 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:58.166 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:58.166 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:58.166 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:58.166 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:58.166 00:51:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:58.166 [2024-11-27 00:51:34.895822] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
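With both extents written, 2 GiB of user data now sits behind the 5120 MiB NV cache, i.e. 5 chunks of 1024 MiB each. That geometry explains the get_properties dump further down: two chunks appear CLOSED at utilization 1.0 and a third OPEN at 0.001953125, a 2 MiB sliver:

    echo "scale=9; 2 / 1024" | bc   # .001953125 (bc drops the leading zero)

The log does not say what the sliver holds; FTL metadata riding the cache alongside user data is a plausible guess, consistent with the write amplification reported at shutdown.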
00:28:58.166 [2024-11-27 00:51:34.896633] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94120 ] 00:28:58.425 [2024-11-27 00:51:35.046398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.425 [2024-11-27 00:51:35.069202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.799  [2024-11-27T00:51:37.155Z] Copying: 625/1024 [MB] (625 MBps) [2024-11-27T00:51:38.097Z] Copying: 1024/1024 [MB] (average 607 MBps) 00:29:01.310 00:29:01.310 00:51:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:01.310 00:51:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:03.224 00:51:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:03.224 00:51:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f48933a22f2455048bf196e7b9db50f8 00:29:03.224 00:51:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:03.224 00:51:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:03.224 00:51:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:03.484 [2024-11-27 00:51:40.077271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:03.484 [2024-11-27 00:51:40.077311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:03.484 [2024-11-27 00:51:40.077322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:03.484 [2024-11-27 00:51:40.077334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:03.484 [2024-11-27 00:51:40.077353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:03.484 [2024-11-27 00:51:40.077360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:03.484 [2024-11-27 00:51:40.077367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:03.484 [2024-11-27 00:51:40.077373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:03.484 [2024-11-27 00:51:40.077388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:03.484 [2024-11-27 00:51:40.077394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:03.484 [2024-11-27 00:51:40.077403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:03.484 [2024-11-27 00:51:40.077408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:03.484 [2024-11-27 00:51:40.077458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.174 ms, result 0 00:29:03.484 true 00:29:03.484 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:03.746 { 00:29:03.746 "name": "ftl", 00:29:03.746 "properties": [ 00:29:03.746 { 00:29:03.746 "name": "superblock_version", 00:29:03.746 "value": 5, 00:29:03.746 "read-only": true 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "name": "base_device", 00:29:03.746 "bands": [ 00:29:03.746 { 00:29:03.746 "id": 0, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 
00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 1, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 2, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 3, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 4, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 5, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 6, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 7, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.746 { 00:29:03.746 "id": 8, 00:29:03.746 "state": "FREE", 00:29:03.746 "validity": 0.0 00:29:03.746 }, 00:29:03.747 { 00:29:03.747 "id": 9, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 10, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 11, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 12, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 13, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 14, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 15, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 16, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 17, 00:29:03.747 "state": "FREE", 00:29:03.747 "validity": 0.0 00:29:03.747 } 00:29:03.747 ], 00:29:03.747 "read-only": true 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "name": "cache_device", 00:29:03.747 "type": "bdev", 00:29:03.747 "chunks": [ 00:29:03.747 { 00:29:03.747 "id": 0, 00:29:03.747 "state": "INACTIVE", 00:29:03.747 "utilization": 0.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 1, 00:29:03.747 "state": "CLOSED", 00:29:03.747 "utilization": 1.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 2, 00:29:03.747 "state": "CLOSED", 00:29:03.747 "utilization": 1.0 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 3, 00:29:03.747 "state": "OPEN", 00:29:03.747 "utilization": 0.001953125 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "id": 4, 00:29:03.747 "state": "OPEN", 00:29:03.747 "utilization": 0.0 00:29:03.747 } 00:29:03.747 ], 00:29:03.747 "read-only": true 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "name": "verbose_mode", 00:29:03.747 "value": true, 00:29:03.747 "unit": "", 00:29:03.747 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:03.747 }, 00:29:03.747 { 00:29:03.747 "name": "prep_upgrade_on_shutdown", 00:29:03.747 "value": false, 00:29:03.747 "unit": "", 00:29:03.747 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:03.747 } 00:29:03.747 ] 00:29:03.747 } 00:29:03.747 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:03.747 [2024-11-27 00:51:40.465594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:03.747 [2024-11-27 00:51:40.465630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:03.747 [2024-11-27 00:51:40.465640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:03.747 [2024-11-27 00:51:40.465645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:03.747 [2024-11-27 00:51:40.465663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:03.747 [2024-11-27 00:51:40.465669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:03.747 [2024-11-27 00:51:40.465675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:03.747 [2024-11-27 00:51:40.465680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:03.747 [2024-11-27 00:51:40.465695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:03.747 [2024-11-27 00:51:40.465700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:03.747 [2024-11-27 00:51:40.465706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:03.747 [2024-11-27 00:51:40.465711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:03.747 [2024-11-27 00:51:40.465763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.160 ms, result 0 00:29:03.747 true 00:29:03.747 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:03.747 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:03.747 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:04.008 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:04.008 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:04.008 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:04.269 [2024-11-27 00:51:40.869947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.269 [2024-11-27 00:51:40.869977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:04.269 [2024-11-27 00:51:40.869987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:04.269 [2024-11-27 00:51:40.869993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.269 [2024-11-27 00:51:40.870010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.269 [2024-11-27 00:51:40.870017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:04.270 [2024-11-27 00:51:40.870023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:04.270 [2024-11-27 00:51:40.870028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.270 [2024-11-27 00:51:40.870043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.270 [2024-11-27 00:51:40.870049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:04.270 [2024-11-27 00:51:40.870055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:04.270 [2024-11-27 00:51:40.870060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:04.270 [2024-11-27 00:51:40.870103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.148 ms, result 0 00:29:04.270 true 00:29:04.270 00:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:04.270 { 00:29:04.270 "name": "ftl", 00:29:04.270 "properties": [ 00:29:04.270 { 00:29:04.270 "name": "superblock_version", 00:29:04.270 "value": 5, 00:29:04.270 "read-only": true 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "name": "base_device", 00:29:04.270 "bands": [ 00:29:04.270 { 00:29:04.270 "id": 0, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 1, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 2, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 3, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 4, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 5, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 6, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 7, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 8, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 9, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 10, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 11, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 12, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 13, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 14, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 15, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 16, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 17, 00:29:04.270 "state": "FREE", 00:29:04.270 "validity": 0.0 00:29:04.270 } 00:29:04.270 ], 00:29:04.270 "read-only": true 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "name": "cache_device", 00:29:04.270 "type": "bdev", 00:29:04.270 "chunks": [ 00:29:04.270 { 00:29:04.270 "id": 0, 00:29:04.270 "state": "INACTIVE", 00:29:04.270 "utilization": 0.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 1, 00:29:04.270 "state": "CLOSED", 00:29:04.270 "utilization": 1.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 2, 00:29:04.270 "state": "CLOSED", 00:29:04.270 "utilization": 1.0 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 3, 00:29:04.270 "state": "OPEN", 00:29:04.270 "utilization": 0.001953125 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "id": 4, 00:29:04.270 "state": "OPEN", 00:29:04.270 "utilization": 0.0 00:29:04.270 } 00:29:04.270 ], 00:29:04.270 "read-only": true 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "name": "verbose_mode", 
00:29:04.270 "value": true, 00:29:04.270 "unit": "", 00:29:04.270 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:04.270 }, 00:29:04.270 { 00:29:04.270 "name": "prep_upgrade_on_shutdown", 00:29:04.270 "value": true, 00:29:04.270 "unit": "", 00:29:04.270 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:04.270 } 00:29:04.270 ] 00:29:04.270 } 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93775 ]] 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93775 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93775 ']' 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93775 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.270 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93775 00:29:04.532 killing process with pid 93775 00:29:04.532 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.532 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.532 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93775' 00:29:04.532 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93775 00:29:04.532 00:51:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93775 00:29:04.532 [2024-11-27 00:51:41.150241] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:04.532 [2024-11-27 00:51:41.153192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.532 [2024-11-27 00:51:41.153224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:04.532 [2024-11-27 00:51:41.153233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:04.532 [2024-11-27 00:51:41.153239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.532 [2024-11-27 00:51:41.153256] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:04.532 [2024-11-27 00:51:41.153643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.532 [2024-11-27 00:51:41.153669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:04.532 [2024-11-27 00:51:41.153676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:29:04.532 [2024-11-27 00:51:41.153682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.650500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.650553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:12.680 [2024-11-27 00:51:48.650565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7496.773 ms 00:29:12.680 [2024-11-27 00:51:48.650572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.651596] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.651624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:12.680 [2024-11-27 00:51:48.651632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.012 ms 00:29:12.680 [2024-11-27 00:51:48.651638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.652498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.652518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:12.680 [2024-11-27 00:51:48.652529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:29:12.680 [2024-11-27 00:51:48.652535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.654108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.654138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:12.680 [2024-11-27 00:51:48.654145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.534 ms 00:29:12.680 [2024-11-27 00:51:48.654152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.656257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.656287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:12.680 [2024-11-27 00:51:48.656295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.079 ms 00:29:12.680 [2024-11-27 00:51:48.656301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.656358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.656365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:12.680 [2024-11-27 00:51:48.656378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:12.680 [2024-11-27 00:51:48.656384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.657602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.657630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:12.680 [2024-11-27 00:51:48.657638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.206 ms 00:29:12.680 [2024-11-27 00:51:48.657643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.658891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.658918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:12.680 [2024-11-27 00:51:48.658925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.224 ms 00:29:12.680 [2024-11-27 00:51:48.658931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.659889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.659916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:12.680 [2024-11-27 00:51:48.659923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:29:12.680 [2024-11-27 00:51:48.659929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.661017] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.661044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:12.680 [2024-11-27 00:51:48.661051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.045 ms 00:29:12.680 [2024-11-27 00:51:48.661056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.661080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:12.680 [2024-11-27 00:51:48.661091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:12.680 [2024-11-27 00:51:48.661098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:12.680 [2024-11-27 00:51:48.661104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:12.680 [2024-11-27 00:51:48.661111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:12.680 [2024-11-27 00:51:48.661200] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:12.680 [2024-11-27 00:51:48.661205] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a1e7c5cb-6d2c-4bf8-94ed-10f405d05c08 00:29:12.680 [2024-11-27 00:51:48.661212] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:12.680 [2024-11-27 00:51:48.661217] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:12.680 [2024-11-27 00:51:48.661226] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:12.680 [2024-11-27 00:51:48.661232] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:12.680 [2024-11-27 00:51:48.661238] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:12.680 [2024-11-27 00:51:48.661245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:12.680 [2024-11-27 00:51:48.661251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:12.680 [2024-11-27 00:51:48.661256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:12.680 [2024-11-27 00:51:48.661261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:12.680 [2024-11-27 00:51:48.661268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.661274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:12.680 [2024-11-27 00:51:48.661280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:29:12.680 [2024-11-27 00:51:48.661286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.680 [2024-11-27 00:51:48.662571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.680 [2024-11-27 00:51:48.662598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:12.680 [2024-11-27 00:51:48.662606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.274 ms 00:29:12.680 [2024-11-27 00:51:48.662612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.662680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:12.681 [2024-11-27 00:51:48.662687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:12.681 [2024-11-27 00:51:48.662693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:12.681 [2024-11-27 00:51:48.662699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.667165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.667194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:12.681 [2024-11-27 00:51:48.667201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.667212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.667233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.667240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:12.681 [2024-11-27 00:51:48.667246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.667251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.667287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.667296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:12.681 [2024-11-27 00:51:48.667302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.667308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.667320] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.667326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:12.681 [2024-11-27 00:51:48.667333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.667339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.675470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.675506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:12.681 [2024-11-27 00:51:48.675514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.675520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.682278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.682311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:12.681 [2024-11-27 00:51:48.682319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.682325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.682374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.682381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:12.681 [2024-11-27 00:51:48.682392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.682397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.682420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.682427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:12.681 [2024-11-27 00:51:48.682432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.682438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.682487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.682495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:12.681 [2024-11-27 00:51:48.682501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.682509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.682530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.682537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:12.681 [2024-11-27 00:51:48.682548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.682553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.682580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.682587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:12.681 [2024-11-27 00:51:48.682593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.682601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 
[2024-11-27 00:51:48.682636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:12.681 [2024-11-27 00:51:48.682644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:12.681 [2024-11-27 00:51:48.682651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:12.681 [2024-11-27 00:51:48.682656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:12.681 [2024-11-27 00:51:48.682747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7529.511 ms, result 0 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94289 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94289 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94289 ']' 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.224 00:51:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:15.224 [2024-11-27 00:51:51.847460] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
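Before the new target's EAL parameter dump continues below, an aside on the 'FTL shutdown' statistics just printed: two of the dumped numbers can be cross-checked against each other. A minimal sketch in Python — every constant is copied from the ftl_debug lines above — showing that the per-band validity counts sum to the reported total valid LBAs, and that the WAF line is simply total media writes over user writes:

```python
# Cross-check the ftl_debug dump printed during 'FTL shutdown':
# bands 1-2 are fully valid (261120/261120), band 3 holds 2048 valid
# blocks, and bands 4-18 are free.
band_valid = [261120, 261120, 2048] + [0] * 15

# "total valid LBAs: 524288" should equal the sum of per-band validity.
assert sum(band_valid) == 524288

# "WAF: 1.5006" is total media writes divided by user-issued writes.
total_writes = 786752   # [FTL][ftl] total writes
user_writes = 524288    # [FTL][ftl] user writes
print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.5006
```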
00:29:15.224 [2024-11-27 00:51:51.847795] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94289 ] 00:29:15.224 [2024-11-27 00:51:52.008447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.486 [2024-11-27 00:51:52.033122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.748 [2024-11-27 00:51:52.289653] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:15.748 [2024-11-27 00:51:52.289714] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:15.748 [2024-11-27 00:51:52.427431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.427464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:15.748 [2024-11-27 00:51:52.427474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:15.748 [2024-11-27 00:51:52.427480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.427520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.427529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:15.748 [2024-11-27 00:51:52.427535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:15.748 [2024-11-27 00:51:52.427541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.427554] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:15.748 [2024-11-27 00:51:52.427732] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:15.748 [2024-11-27 00:51:52.427744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.427750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:15.748 [2024-11-27 00:51:52.427756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:29:15.748 [2024-11-27 00:51:52.427762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.428668] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:15.748 [2024-11-27 00:51:52.430988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.431016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:15.748 [2024-11-27 00:51:52.431023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.322 ms 00:29:15.748 [2024-11-27 00:51:52.431029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.431078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.431085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:15.748 [2024-11-27 00:51:52.431092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:15.748 [2024-11-27 00:51:52.431098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.435451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 
00:51:52.435471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:15.748 [2024-11-27 00:51:52.435479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.320 ms 00:29:15.748 [2024-11-27 00:51:52.435485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.435516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.435523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:15.748 [2024-11-27 00:51:52.435532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:15.748 [2024-11-27 00:51:52.435540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.435579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.435587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:15.748 [2024-11-27 00:51:52.435593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:15.748 [2024-11-27 00:51:52.435599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.435616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:15.748 [2024-11-27 00:51:52.436747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.436767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:15.748 [2024-11-27 00:51:52.436775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:29:15.748 [2024-11-27 00:51:52.436780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.436804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.436811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:15.748 [2024-11-27 00:51:52.436817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:15.748 [2024-11-27 00:51:52.436823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.436838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:15.748 [2024-11-27 00:51:52.436862] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:15.748 [2024-11-27 00:51:52.436892] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:15.748 [2024-11-27 00:51:52.436905] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:15.748 [2024-11-27 00:51:52.436984] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:15.748 [2024-11-27 00:51:52.436993] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:15.748 [2024-11-27 00:51:52.437002] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:15.748 [2024-11-27 00:51:52.437009] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:15.748 [2024-11-27 00:51:52.437016] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:15.748 [2024-11-27 00:51:52.437023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:15.748 [2024-11-27 00:51:52.437028] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:15.748 [2024-11-27 00:51:52.437034] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:15.748 [2024-11-27 00:51:52.437039] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:15.748 [2024-11-27 00:51:52.437048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.437055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:15.748 [2024-11-27 00:51:52.437064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:29:15.748 [2024-11-27 00:51:52.437069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.437135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.748 [2024-11-27 00:51:52.437141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:15.748 [2024-11-27 00:51:52.437147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:15.748 [2024-11-27 00:51:52.437152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.748 [2024-11-27 00:51:52.437231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:15.748 [2024-11-27 00:51:52.437245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:15.748 [2024-11-27 00:51:52.437255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:15.748 [2024-11-27 00:51:52.437262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.748 [2024-11-27 00:51:52.437268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:15.748 [2024-11-27 00:51:52.437276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:15.748 [2024-11-27 00:51:52.437282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:15.749 [2024-11-27 00:51:52.437287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:15.749 [2024-11-27 00:51:52.437293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:15.749 [2024-11-27 00:51:52.437298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:15.749 [2024-11-27 00:51:52.437308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:15.749 [2024-11-27 00:51:52.437313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:15.749 [2024-11-27 00:51:52.437327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:15.749 [2024-11-27 00:51:52.437331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:15.749 [2024-11-27 00:51:52.437346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:15.749 [2024-11-27 00:51:52.437351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437357] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:15.749 [2024-11-27 00:51:52.437361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:15.749 [2024-11-27 00:51:52.437366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.749 [2024-11-27 00:51:52.437371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:15.749 [2024-11-27 00:51:52.437376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:15.749 [2024-11-27 00:51:52.437381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.749 [2024-11-27 00:51:52.437386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:15.749 [2024-11-27 00:51:52.437391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:15.749 [2024-11-27 00:51:52.437395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.749 [2024-11-27 00:51:52.437401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:15.749 [2024-11-27 00:51:52.437407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:15.749 [2024-11-27 00:51:52.437412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.749 [2024-11-27 00:51:52.437418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:15.749 [2024-11-27 00:51:52.437426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:15.749 [2024-11-27 00:51:52.437431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:15.749 [2024-11-27 00:51:52.437443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:15.749 [2024-11-27 00:51:52.437448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:15.749 [2024-11-27 00:51:52.437462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:15.749 [2024-11-27 00:51:52.437479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:15.749 [2024-11-27 00:51:52.437485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437491] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:15.749 [2024-11-27 00:51:52.437498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:15.749 [2024-11-27 00:51:52.437504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:15.749 [2024-11-27 00:51:52.437510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.749 [2024-11-27 00:51:52.437517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:15.749 [2024-11-27 00:51:52.437524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:15.749 [2024-11-27 00:51:52.437529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:15.749 [2024-11-27 00:51:52.437536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:15.749 [2024-11-27 00:51:52.437542] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:15.749 [2024-11-27 00:51:52.437548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:15.749 [2024-11-27 00:51:52.437554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:15.749 [2024-11-27 00:51:52.437562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:15.749 [2024-11-27 00:51:52.437576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:15.749 [2024-11-27 00:51:52.437595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:15.749 [2024-11-27 00:51:52.437601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:15.749 [2024-11-27 00:51:52.437607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:15.749 [2024-11-27 00:51:52.437613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:15.749 [2024-11-27 00:51:52.437661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:15.749 [2024-11-27 00:51:52.437668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:15.749 [2024-11-27 00:51:52.437684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:15.749 [2024-11-27 00:51:52.437690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:15.749 [2024-11-27 00:51:52.437697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:15.749 [2024-11-27 00:51:52.437720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.749 [2024-11-27 00:51:52.437727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:15.749 [2024-11-27 00:51:52.437733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:29:15.749 [2024-11-27 00:51:52.437740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.749 [2024-11-27 00:51:52.437771] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:15.749 [2024-11-27 00:51:52.437780] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:19.046 [2024-11-27 00:51:55.805922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.046 [2024-11-27 00:51:55.805975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:19.046 [2024-11-27 00:51:55.805989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3368.137 ms 00:29:19.046 [2024-11-27 00:51:55.806005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.046 [2024-11-27 00:51:55.815660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.046 [2024-11-27 00:51:55.815701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:19.046 [2024-11-27 00:51:55.815712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.560 ms 00:29:19.046 [2024-11-27 00:51:55.815721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.046 [2024-11-27 00:51:55.815777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.046 [2024-11-27 00:51:55.815787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:19.046 [2024-11-27 00:51:55.815796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:19.046 [2024-11-27 00:51:55.815809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.046 [2024-11-27 00:51:55.825873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.046 [2024-11-27 00:51:55.825908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:19.046 [2024-11-27 00:51:55.825920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.021 ms 00:29:19.046 [2024-11-27 00:51:55.825928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.046 [2024-11-27 00:51:55.825956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.046 [2024-11-27 00:51:55.825965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:19.046 [2024-11-27 00:51:55.825977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.046 [2024-11-27 00:51:55.825985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.046 [2024-11-27 00:51:55.826405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.046 [2024-11-27 00:51:55.826425] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:19.046 [2024-11-27 00:51:55.826434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.348 ms 00:29:19.046 [2024-11-27 00:51:55.826443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.046 [2024-11-27 00:51:55.826498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.046 [2024-11-27 00:51:55.826510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:19.046 [2024-11-27 00:51:55.826522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:19.046 [2024-11-27 00:51:55.826535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.833290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.833325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:19.307 [2024-11-27 00:51:55.833334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.733 ms 00:29:19.307 [2024-11-27 00:51:55.833349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.843978] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:19.307 [2024-11-27 00:51:55.844031] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:19.307 [2024-11-27 00:51:55.844052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.844064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:19.307 [2024-11-27 00:51:55.844077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.615 ms 00:29:19.307 [2024-11-27 00:51:55.844087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.849438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.849480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:19.307 [2024-11-27 00:51:55.849494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.294 ms 00:29:19.307 [2024-11-27 00:51:55.849504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.852053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.852093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:19.307 [2024-11-27 00:51:55.852105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.493 ms 00:29:19.307 [2024-11-27 00:51:55.852115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.854206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.854247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:19.307 [2024-11-27 00:51:55.854257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.041 ms 00:29:19.307 [2024-11-27 00:51:55.854266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.854619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.854633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:19.307 [2024-11-27 
00:51:55.854642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:29:19.307 [2024-11-27 00:51:55.854651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.876377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.876588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:19.307 [2024-11-27 00:51:55.876609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.705 ms 00:29:19.307 [2024-11-27 00:51:55.876626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.893312] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:19.307 [2024-11-27 00:51:55.894229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.894268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:19.307 [2024-11-27 00:51:55.894280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.191 ms 00:29:19.307 [2024-11-27 00:51:55.894289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.894378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.894391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:19.307 [2024-11-27 00:51:55.894405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:19.307 [2024-11-27 00:51:55.894413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.894471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.894488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:19.307 [2024-11-27 00:51:55.894497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:19.307 [2024-11-27 00:51:55.894504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.894529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.894538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:19.307 [2024-11-27 00:51:55.894547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:19.307 [2024-11-27 00:51:55.894561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.894595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:19.307 [2024-11-27 00:51:55.894606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.894614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:19.307 [2024-11-27 00:51:55.894625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:19.307 [2024-11-27 00:51:55.894633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.307 [2024-11-27 00:51:55.898881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.307 [2024-11-27 00:51:55.898919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:19.307 [2024-11-27 00:51:55.898930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.224 ms 00:29:19.308 [2024-11-27 00:51:55.898938] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.308 [2024-11-27 00:51:55.899023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.308 [2024-11-27 00:51:55.899033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:19.308 [2024-11-27 00:51:55.899042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:19.308 [2024-11-27 00:51:55.899053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.308 [2024-11-27 00:51:55.900103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3472.210 ms, result 0 00:29:19.308 [2024-11-27 00:51:55.915262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.308 [2024-11-27 00:51:55.931265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:19.308 [2024-11-27 00:51:55.939417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:19.568 00:51:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.568 00:51:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:19.568 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:19.568 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:19.568 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:19.568 [2024-11-27 00:51:56.291570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.568 [2024-11-27 00:51:56.291619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:19.568 [2024-11-27 00:51:56.291634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:19.568 [2024-11-27 00:51:56.291649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.568 [2024-11-27 00:51:56.291672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.568 [2024-11-27 00:51:56.291682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:19.568 [2024-11-27 00:51:56.291694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:19.568 [2024-11-27 00:51:56.291702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.568 [2024-11-27 00:51:56.291723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.568 [2024-11-27 00:51:56.291731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:19.568 [2024-11-27 00:51:56.291740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:19.568 [2024-11-27 00:51:56.291747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.568 [2024-11-27 00:51:56.291810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.228 ms, result 0 00:29:19.568 true 00:29:19.568 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:19.828 { 00:29:19.828 "name": "ftl", 00:29:19.828 "properties": [ 00:29:19.828 { 00:29:19.828 "name": "superblock_version", 00:29:19.828 "value": 5, 00:29:19.828 "read-only": true 00:29:19.828 }, 
00:29:19.828 { 00:29:19.828 "name": "base_device", 00:29:19.828 "bands": [ 00:29:19.828 { 00:29:19.828 "id": 0, 00:29:19.828 "state": "CLOSED", 00:29:19.828 "validity": 1.0 00:29:19.828 }, 00:29:19.828 { 00:29:19.828 "id": 1, 00:29:19.828 "state": "CLOSED", 00:29:19.828 "validity": 1.0 00:29:19.828 }, 00:29:19.828 { 00:29:19.828 "id": 2, 00:29:19.829 "state": "CLOSED", 00:29:19.829 "validity": 0.007843137254901933 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 3, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 4, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 5, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 6, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 7, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 8, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 9, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 10, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 11, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 12, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 13, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 14, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 15, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 16, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 17, 00:29:19.829 "state": "FREE", 00:29:19.829 "validity": 0.0 00:29:19.829 } 00:29:19.829 ], 00:29:19.829 "read-only": true 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "name": "cache_device", 00:29:19.829 "type": "bdev", 00:29:19.829 "chunks": [ 00:29:19.829 { 00:29:19.829 "id": 0, 00:29:19.829 "state": "INACTIVE", 00:29:19.829 "utilization": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 1, 00:29:19.829 "state": "OPEN", 00:29:19.829 "utilization": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 2, 00:29:19.829 "state": "OPEN", 00:29:19.829 "utilization": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 3, 00:29:19.829 "state": "FREE", 00:29:19.829 "utilization": 0.0 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "id": 4, 00:29:19.829 "state": "FREE", 00:29:19.829 "utilization": 0.0 00:29:19.829 } 00:29:19.829 ], 00:29:19.829 "read-only": true 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "name": "verbose_mode", 00:29:19.829 "value": true, 00:29:19.829 "unit": "", 00:29:19.829 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:19.829 }, 00:29:19.829 { 00:29:19.829 "name": "prep_upgrade_on_shutdown", 00:29:19.829 "value": false, 00:29:19.829 "unit": "", 00:29:19.829 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:19.829 } 00:29:19.829 ] 00:29:19.829 } 00:29:19.829 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:19.829 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:19.829 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:20.089 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:20.089 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:20.089 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:20.089 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:20.089 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:20.350 Validate MD5 checksum, iteration 1 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:20.350 00:51:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:20.350 [2024-11-27 00:51:57.047602] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:29:20.350 [2024-11-27 00:51:57.048006] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94359 ] 00:29:20.610 [2024-11-27 00:51:57.209072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.610 [2024-11-27 00:51:57.249917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.994  [2024-11-27T00:51:59.719Z] Copying: 510/1024 [MB] (510 MBps) [2024-11-27T00:52:00.289Z] Copying: 1024/1024 [MB] (average 542 MBps) 00:29:23.502 00:29:23.502 00:52:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:23.502 00:52:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:26.072 Validate MD5 checksum, iteration 2 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=68c77f7ba89855816859ae27ab57a0c5 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 68c77f7ba89855816859ae27ab57a0c5 != \6\8\c\7\7\f\7\b\a\8\9\8\5\5\8\1\6\8\5\9\a\e\2\7\a\b\5\7\a\0\c\5 ]] 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:26.072 00:52:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:26.072 [2024-11-27 00:52:02.468873] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
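Iteration 1's checksum (68c77f7ba89855816859ae27ab57a0c5) matched and skip advanced to 1024; the spdk_dd run starting above is iteration 2 reading the next window. The shape of the test_validate_checksum loop, reconstructed as a sketch — the binary path, flags, output file, and both sums are copied from the xtrace, while expressing it as a Python loop (rather than the actual upgrade_shutdown.sh shell loop) is purely illustrative:

```python
# Each iteration copies one 1 GiB window (1024 x 1 MiB blocks) out of
# ftln1 over NVMe/TCP and compares its md5 against the recorded sum.
import hashlib
import subprocess

SPDK_DD = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd"
OUT = "/home/vagrant/spdk_repo/spdk/test/ftl/file"
BS, COUNT, QD = 1048576, 1024, 2

recorded_sums = [
    "68c77f7ba89855816859ae27ab57a0c5",   # iteration 1
    "f48933a22f2455048bf196e7b9db50f8",   # iteration 2
]

skip = 0
for i, expected in enumerate(recorded_sums, start=1):
    subprocess.run([
        SPDK_DD, "--cpumask=[1]", "--rpc-socket=/var/tmp/spdk.tgt.sock",
        "--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json",
        "--ib=ftln1", f"--of={OUT}", f"--bs={BS}", f"--count={COUNT}",
        f"--qd={QD}", f"--skip={skip}",
    ], check=True)
    h = hashlib.md5()
    with open(OUT, "rb") as f:
        for block in iter(lambda: f.read(BS), b""):
            h.update(block)
    assert h.hexdigest() == expected, f"iteration {i}: checksum mismatch"
    skip += COUNT   # --skip advances one full window per pass, as in the log
```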
00:29:26.072 [2024-11-27 00:52:02.468976] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94419 ] 00:29:26.072 [2024-11-27 00:52:02.622484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.072 [2024-11-27 00:52:02.644143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.447  [2024-11-27T00:52:04.493Z] Copying: 703/1024 [MB] (703 MBps) [2024-11-27T00:52:05.060Z] Copying: 1024/1024 [MB] (average 687 MBps) 00:29:28.273 00:29:28.273 00:52:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:28.273 00:52:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f48933a22f2455048bf196e7b9db50f8 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f48933a22f2455048bf196e7b9db50f8 != \f\4\8\9\3\3\a\2\2\f\2\4\5\5\0\4\8\b\f\1\9\6\e\7\b\9\d\b\5\0\f\8 ]] 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94289 ]] 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94289 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94465 00:29:30.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94465 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94465 ']' 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
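The shutdown step above (tcp_target_shutdown_dirty) is deliberately abrupt: the old target is SIGKILLed so FTL never reaches 'Set FTL clean state', and a new target is started from the same tgt.json, forcing the next bringup through the dirty-recovery path. A sketch of that kill-and-restart sequence — the PID, binary, and config path are from the xtrace; the Python wrapper and the socket-polling stand-in for waitforlisten are assumptions:

```python
# tcp_target_shutdown_dirty + tcp_target_setup, as seen above: SIGKILL
# denies FTL any chance to persist clean-shutdown metadata.
import os
import signal
import subprocess
import time

TGT = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt"
CFG = "/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json"
SOCK = "/var/tmp/spdk.sock"

os.kill(94289, signal.SIGKILL)   # kill -9 $spdk_tgt_pid (the old target)

tgt = subprocess.Popen([TGT, "--cpumask=[0]", f"--config={CFG}"])
# waitforlisten equivalent: poll until the restarted target is reachable
# on the RPC socket (simplified here to a file-existence check).
while not os.path.exists(SOCK):
    time.sleep(0.1)
print(f"spdk_tgt_pid={tgt.pid}")  # the log records 94465 at this point
```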
00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.182 00:52:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.444 [2024-11-27 00:52:06.981980] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:29:30.444 [2024-11-27 00:52:06.982092] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94465 ] 00:29:30.444 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94289 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:30.444 [2024-11-27 00:52:07.143747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.444 [2024-11-27 00:52:07.163467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.706 [2024-11-27 00:52:07.456932] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:30.706 [2024-11-27 00:52:07.457290] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:30.968 [2024-11-27 00:52:07.609867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.609910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:30.968 [2024-11-27 00:52:07.609923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:30.968 [2024-11-27 00:52:07.609931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.609986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.609998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:30.968 [2024-11-27 00:52:07.610006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:30.968 [2024-11-27 00:52:07.610014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.610034] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:30.968 [2024-11-27 00:52:07.610274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:30.968 [2024-11-27 00:52:07.610290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.610298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:30.968 [2024-11-27 00:52:07.610306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:29:30.968 [2024-11-27 00:52:07.610312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.610603] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:30.968 [2024-11-27 00:52:07.614837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.614892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:30.968 [2024-11-27 00:52:07.614902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.236 ms 00:29:30.968 [2024-11-27 00:52:07.614910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.615947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:30.968 [2024-11-27 00:52:07.615981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:30.968 [2024-11-27 00:52:07.615991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:30.968 [2024-11-27 00:52:07.616001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.616258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.616270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:30.968 [2024-11-27 00:52:07.616279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:29:30.968 [2024-11-27 00:52:07.616286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.616318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.616326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:30.968 [2024-11-27 00:52:07.616334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:30.968 [2024-11-27 00:52:07.616341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.616364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.616377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:30.968 [2024-11-27 00:52:07.616386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:30.968 [2024-11-27 00:52:07.616394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.616418] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:30.968 [2024-11-27 00:52:07.617308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.617323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:30.968 [2024-11-27 00:52:07.617332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.899 ms 00:29:30.968 [2024-11-27 00:52:07.617340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.617368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.617376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:30.968 [2024-11-27 00:52:07.617383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:30.968 [2024-11-27 00:52:07.617390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.617411] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:30.968 [2024-11-27 00:52:07.617429] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:30.968 [2024-11-27 00:52:07.617462] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:30.968 [2024-11-27 00:52:07.617488] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:30.968 [2024-11-27 00:52:07.617589] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:30.968 [2024-11-27 00:52:07.617607] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:30.968 [2024-11-27 00:52:07.617617] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:30.968 [2024-11-27 00:52:07.617627] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:30.968 [2024-11-27 00:52:07.617636] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:30.968 [2024-11-27 00:52:07.617645] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:30.968 [2024-11-27 00:52:07.617652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:30.968 [2024-11-27 00:52:07.617663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:30.968 [2024-11-27 00:52:07.617684] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:30.968 [2024-11-27 00:52:07.617692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.968 [2024-11-27 00:52:07.617702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:30.968 [2024-11-27 00:52:07.617709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:29:30.968 [2024-11-27 00:52:07.617716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.968 [2024-11-27 00:52:07.617804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.969 [2024-11-27 00:52:07.617811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:30.969 [2024-11-27 00:52:07.617822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:29:30.969 [2024-11-27 00:52:07.617833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.969 [2024-11-27 00:52:07.617979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:30.969 [2024-11-27 00:52:07.617992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:30.969 [2024-11-27 00:52:07.618005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:30.969 [2024-11-27 00:52:07.618030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:30.969 [2024-11-27 00:52:07.618048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:30.969 [2024-11-27 00:52:07.618055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:30.969 [2024-11-27 00:52:07.618063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:30.969 [2024-11-27 00:52:07.618085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:30.969 [2024-11-27 00:52:07.618093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:30.969 [2024-11-27 00:52:07.618113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
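[Editor's note] The startup records on either side of this point follow a fixed trace_step quadruple — Action, name, duration, status — one per FTL management step (Check configuration, Open base bdev, Load super block, and so on). When auditing a long run, the quadruples can be folded into one line per step with a short awk filter; a sketch assuming one NOTICE record per line, as spdk_tgt emits them, with ftl.log standing in for a saved copy of this console output:

    # Collapse trace_step quadruples into "name  duration  status" lines.
    awk '
        /trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
        /trace_step.*duration:/ { sub(/.*duration: /, ""); dur  = $0 }
        /trace_step.*status:/   { sub(/.*status: /, "");
                                  printf "%-40s %-12s status=%s\n", name, dur, $0 }
    ' ftl.log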
00:29:30.969 [2024-11-27 00:52:07.618120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:30.969 [2024-11-27 00:52:07.618134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:30.969 [2024-11-27 00:52:07.618141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:30.969 [2024-11-27 00:52:07.618158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:30.969 [2024-11-27 00:52:07.618165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:30.969 [2024-11-27 00:52:07.618180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:30.969 [2024-11-27 00:52:07.618187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:30.969 [2024-11-27 00:52:07.618201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:30.969 [2024-11-27 00:52:07.618208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:30.969 [2024-11-27 00:52:07.618225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:30.969 [2024-11-27 00:52:07.618234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:30.969 [2024-11-27 00:52:07.618249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:30.969 [2024-11-27 00:52:07.618256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:30.969 [2024-11-27 00:52:07.618271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:30.969 [2024-11-27 00:52:07.618292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:30.969 [2024-11-27 00:52:07.618315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:30.969 [2024-11-27 00:52:07.618322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618331] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:30.969 [2024-11-27 00:52:07.618340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:30.969 [2024-11-27 00:52:07.618350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:30.969 [2024-11-27 00:52:07.618366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:30.969 [2024-11-27 00:52:07.618374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:30.969 [2024-11-27 00:52:07.618382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:30.969 [2024-11-27 00:52:07.618389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:30.969 [2024-11-27 00:52:07.618396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:30.969 [2024-11-27 00:52:07.618404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:30.969 [2024-11-27 00:52:07.618412] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:30.969 [2024-11-27 00:52:07.618422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:30.969 [2024-11-27 00:52:07.618440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:30.969 [2024-11-27 00:52:07.618466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:30.969 [2024-11-27 00:52:07.618474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:30.969 [2024-11-27 00:52:07.618486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:30.969 [2024-11-27 00:52:07.618493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:30.969 [2024-11-27 00:52:07.618544] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:30.969 [2024-11-27 00:52:07.618551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:30.969 [2024-11-27 00:52:07.618566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:30.969 [2024-11-27 00:52:07.618572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:30.969 [2024-11-27 00:52:07.618579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:30.969 [2024-11-27 00:52:07.618587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.969 [2024-11-27 00:52:07.618596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:30.969 [2024-11-27 00:52:07.618606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.676 ms 00:29:30.969 [2024-11-27 00:52:07.618613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.969 [2024-11-27 00:52:07.625641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.969 [2024-11-27 00:52:07.625680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:30.969 [2024-11-27 00:52:07.625690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.982 ms 00:29:30.969 [2024-11-27 00:52:07.625699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.969 [2024-11-27 00:52:07.625732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.969 [2024-11-27 00:52:07.625740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:30.969 [2024-11-27 00:52:07.625750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:30.969 [2024-11-27 00:52:07.625757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.969 [2024-11-27 00:52:07.634834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.969 [2024-11-27 00:52:07.634878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:30.969 [2024-11-27 00:52:07.634892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.034 ms 00:29:30.969 [2024-11-27 00:52:07.634900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.969 [2024-11-27 00:52:07.634925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.969 [2024-11-27 00:52:07.634935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:30.969 [2024-11-27 00:52:07.634943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:30.969 [2024-11-27 00:52:07.634952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.969 [2024-11-27 00:52:07.635032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.969 [2024-11-27 00:52:07.635043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:30.970 [2024-11-27 00:52:07.635054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:30.970 [2024-11-27 00:52:07.635061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.635101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.635110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:30.970 [2024-11-27 00:52:07.635125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:30.970 [2024-11-27 00:52:07.635131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.640822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.640873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:30.970 [2024-11-27 00:52:07.640883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.670 ms 00:29:30.970 [2024-11-27 00:52:07.640890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.640976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.640988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:30.970 [2024-11-27 00:52:07.640998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:30.970 [2024-11-27 00:52:07.641006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.653171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.653211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:30.970 [2024-11-27 00:52:07.653225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.147 ms 00:29:30.970 [2024-11-27 00:52:07.653233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.654510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.654552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:30.970 [2024-11-27 00:52:07.654568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:29:30.970 [2024-11-27 00:52:07.654577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.672307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.672493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:30.970 [2024-11-27 00:52:07.672515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.685 ms 00:29:30.970 [2024-11-27 00:52:07.672524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.672634] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:30.970 [2024-11-27 00:52:07.672717] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:30.970 [2024-11-27 00:52:07.672796] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:30.970 [2024-11-27 00:52:07.672892] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:30.970 [2024-11-27 00:52:07.672902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.672915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:30.970 [2024-11-27 
00:52:07.672926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:29:30.970 [2024-11-27 00:52:07.672934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.672979] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:30.970 [2024-11-27 00:52:07.672995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.673003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:30.970 [2024-11-27 00:52:07.673011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:30.970 [2024-11-27 00:52:07.673019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.676518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.676553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:30.970 [2024-11-27 00:52:07.676568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.474 ms 00:29:30.970 [2024-11-27 00:52:07.676576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.677317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.677346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:30.970 [2024-11-27 00:52:07.677357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:30.970 [2024-11-27 00:52:07.677365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.970 [2024-11-27 00:52:07.677435] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:30.970 [2024-11-27 00:52:07.677583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.970 [2024-11-27 00:52:07.677594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:30.970 [2024-11-27 00:52:07.677607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.149 ms 00:29:30.970 [2024-11-27 00:52:07.677618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.914 [2024-11-27 00:52:08.362150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.914 [2024-11-27 00:52:08.362222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:31.914 [2024-11-27 00:52:08.362239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 684.206 ms 00:29:31.914 [2024-11-27 00:52:08.362258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.914 [2024-11-27 00:52:08.364475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.914 [2024-11-27 00:52:08.364530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:31.915 [2024-11-27 00:52:08.364542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.672 ms 00:29:31.915 [2024-11-27 00:52:08.364551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.915 [2024-11-27 00:52:08.365534] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:31.915 [2024-11-27 00:52:08.365583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.915 [2024-11-27 00:52:08.365593] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:31.915 [2024-11-27 00:52:08.365616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.991 ms 00:29:31.915 [2024-11-27 00:52:08.365627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.915 [2024-11-27 00:52:08.365682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.915 [2024-11-27 00:52:08.365697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:31.915 [2024-11-27 00:52:08.365708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:31.915 [2024-11-27 00:52:08.365718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:31.915 [2024-11-27 00:52:08.365754] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 688.315 ms, result 0 00:29:31.915 [2024-11-27 00:52:08.365801] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:31.915 [2024-11-27 00:52:08.365955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:31.915 [2024-11-27 00:52:08.365969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:31.915 [2024-11-27 00:52:08.365978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.154 ms 00:29:31.915 [2024-11-27 00:52:08.365987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.488 [2024-11-27 00:52:09.152194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.488 [2024-11-27 00:52:09.152265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:32.488 [2024-11-27 00:52:09.152280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 785.652 ms 00:29:32.488 [2024-11-27 00:52:09.152288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.488 [2024-11-27 00:52:09.155051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.488 [2024-11-27 00:52:09.155106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:32.488 [2024-11-27 00:52:09.155117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.150 ms 00:29:32.488 [2024-11-27 00:52:09.155125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.488 [2024-11-27 00:52:09.156429] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:32.488 [2024-11-27 00:52:09.156524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.488 [2024-11-27 00:52:09.156535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:32.488 [2024-11-27 00:52:09.156545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.365 ms 00:29:32.488 [2024-11-27 00:52:09.156552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.488 [2024-11-27 00:52:09.156596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.488 [2024-11-27 00:52:09.156605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:32.489 [2024-11-27 00:52:09.156614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:32.489 [2024-11-27 00:52:09.156622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 
00:52:09.156660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 790.852 ms, result 0 00:29:32.489 [2024-11-27 00:52:09.156706] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:32.489 [2024-11-27 00:52:09.156719] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:32.489 [2024-11-27 00:52:09.156731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.156738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:32.489 [2024-11-27 00:52:09.156752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1479.309 ms 00:29:32.489 [2024-11-27 00:52:09.156760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.156789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.156798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:32.489 [2024-11-27 00:52:09.156807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:32.489 [2024-11-27 00:52:09.156815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.165937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:32.489 [2024-11-27 00:52:09.166076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.166094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:32.489 [2024-11-27 00:52:09.166104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.244 ms 00:29:32.489 [2024-11-27 00:52:09.166112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.166821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.166875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:32.489 [2024-11-27 00:52:09.166887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.632 ms 00:29:32.489 [2024-11-27 00:52:09.166897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.169137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.169163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:32.489 [2024-11-27 00:52:09.169174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.200 ms 00:29:32.489 [2024-11-27 00:52:09.169183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.169224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.169240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:32.489 [2024-11-27 00:52:09.169250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:32.489 [2024-11-27 00:52:09.169260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.169372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.169389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:32.489 
[2024-11-27 00:52:09.169401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:32.489 [2024-11-27 00:52:09.169410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.169433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.169447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:32.489 [2024-11-27 00:52:09.169457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:32.489 [2024-11-27 00:52:09.169473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.169508] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:32.489 [2024-11-27 00:52:09.169520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.169529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:32.489 [2024-11-27 00:52:09.169543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:32.489 [2024-11-27 00:52:09.169550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.169611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:32.489 [2024-11-27 00:52:09.169622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:32.489 [2024-11-27 00:52:09.169632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:29:32.489 [2024-11-27 00:52:09.169641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:32.489 [2024-11-27 00:52:09.170842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1560.533 ms, result 0 00:29:32.489 [2024-11-27 00:52:09.186483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.489 [2024-11-27 00:52:09.202481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:32.489 [2024-11-27 00:52:09.210561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:32.750 Validate MD5 checksum, iteration 1 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:32.750 00:52:09 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:32.750 00:52:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:33.011 [2024-11-27 00:52:09.582131] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:29:33.011 [2024-11-27 00:52:09.582380] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94504 ] 00:29:33.011 [2024-11-27 00:52:09.740792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.011 [2024-11-27 00:52:09.775525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.395  [2024-11-27T00:52:12.122Z] Copying: 568/1024 [MB] (568 MBps) [2024-11-27T00:52:13.508Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:29:36.721 00:29:36.721 00:52:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:36.721 00:52:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:38.637 Validate MD5 checksum, iteration 2 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=68c77f7ba89855816859ae27ab57a0c5 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 68c77f7ba89855816859ae27ab57a0c5 != \6\8\c\7\7\f\7\b\a\8\9\8\5\5\8\1\6\8\5\9\a\e\2\7\a\b\5\7\a\0\c\5 ]] 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:38.637 00:52:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:38.637 [2024-11-27 00:52:15.382130] Starting SPDK v25.01-pre git sha1 
2f2acf4eb / DPDK 23.11.0 initialization... 00:29:38.637 [2024-11-27 00:52:15.382448] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94567 ] 00:29:38.898 [2024-11-27 00:52:15.539937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.898 [2024-11-27 00:52:15.563920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.285  [2024-11-27T00:52:18.018Z] Copying: 603/1024 [MB] (603 MBps) [2024-11-27T00:52:20.561Z] Copying: 1024/1024 [MB] (average 582 MBps) 00:29:43.774 00:29:43.774 00:52:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:43.774 00:52:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f48933a22f2455048bf196e7b9db50f8 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f48933a22f2455048bf196e7b9db50f8 != \f\4\8\9\3\3\a\2\2\f\2\4\5\5\0\4\8\b\f\1\9\6\e\7\b\9\d\b\5\0\f\8 ]] 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94465 ]] 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94465 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94465 ']' 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94465 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94465 00:29:46.315 killing process with pid 94465 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94465' 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 94465 00:29:46.315 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94465 00:29:46.315 [2024-11-27 00:52:22.745916] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:46.315 [2024-11-27 00:52:22.750163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.750194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:46.315 [2024-11-27 00:52:22.750204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:46.315 [2024-11-27 00:52:22.750211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.750228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:46.315 [2024-11-27 00:52:22.750587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.750604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:46.315 [2024-11-27 00:52:22.750611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:29:46.315 [2024-11-27 00:52:22.750617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.750797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.750806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:46.315 [2024-11-27 00:52:22.750812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:29:46.315 [2024-11-27 00:52:22.750818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.752010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.752034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:46.315 [2024-11-27 00:52:22.752041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.180 ms 00:29:46.315 [2024-11-27 00:52:22.752050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.752930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.752957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:46.315 [2024-11-27 00:52:22.752964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:29:46.315 [2024-11-27 00:52:22.752970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.755196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.755224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:46.315 [2024-11-27 00:52:22.755235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.200 ms 00:29:46.315 [2024-11-27 00:52:22.755241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.756627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.756652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:46.315 [2024-11-27 00:52:22.756660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.361 ms 00:29:46.315 [2024-11-27 00:52:22.756666] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.756722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.756734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:46.315 [2024-11-27 00:52:22.756740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:46.315 [2024-11-27 00:52:22.756748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.758153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.758293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:46.315 [2024-11-27 00:52:22.758306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.393 ms 00:29:46.315 [2024-11-27 00:52:22.758312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.759607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.759633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:46.315 [2024-11-27 00:52:22.759640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.271 ms 00:29:46.315 [2024-11-27 00:52:22.759646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.761541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.761627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:46.315 [2024-11-27 00:52:22.761638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.822 ms 00:29:46.315 [2024-11-27 00:52:22.761652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.763697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.315 [2024-11-27 00:52:22.763722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:46.315 [2024-11-27 00:52:22.763728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.993 ms 00:29:46.315 [2024-11-27 00:52:22.763734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.315 [2024-11-27 00:52:22.763757] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:46.315 [2024-11-27 00:52:22.763767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:46.315 [2024-11-27 00:52:22.763775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:46.315 [2024-11-27 00:52:22.763782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:46.315 [2024-11-27 00:52:22.763788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 
[2024-11-27 00:52:22.763817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:46.315 [2024-11-27 00:52:22.763885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:46.315 [2024-11-27 00:52:22.763890] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a1e7c5cb-6d2c-4bf8-94ed-10f405d05c08 00:29:46.315 [2024-11-27 00:52:22.763897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:46.315 [2024-11-27 00:52:22.763903] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:46.315 [2024-11-27 00:52:22.763909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:46.315 [2024-11-27 00:52:22.763915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:46.315 [2024-11-27 00:52:22.763920] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:46.315 [2024-11-27 00:52:22.763926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:46.316 [2024-11-27 00:52:22.763935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:46.316 [2024-11-27 00:52:22.763940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:46.316 [2024-11-27 00:52:22.763946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:46.316 [2024-11-27 00:52:22.763952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.316 [2024-11-27 00:52:22.763958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:46.316 [2024-11-27 00:52:22.763964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:29:46.316 [2024-11-27 00:52:22.763969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.765257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.316 [2024-11-27 00:52:22.765332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:46.316 [2024-11-27 00:52:22.765369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.275 ms 00:29:46.316 [2024-11-27 00:52:22.765387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
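[Editor's note] The WAF: inf in the statistics dump above is expected here: as the inf output implies, write amplification is the ratio of total media writes to user writes, and this target instance performed only internal recovery and metadata writes (total writes: 320, user writes: 0), so the ratio is a division by zero. For illustration only, with the two counters copied from the dump:

    # WAF = total writes / user writes; print "inf" when there were
    # no user writes, matching the dump above (320 / 0).
    total_writes=320 user_writes=0
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { if (u == 0) print "inf"; else printf "%.2f\n", t / u }'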
00:29:46.316 [2024-11-27 00:52:22.765468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.316 [2024-11-27 00:52:22.765486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:46.316 [2024-11-27 00:52:22.765502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:46.316 [2024-11-27 00:52:22.765516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.769908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.769994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:46.316 [2024-11-27 00:52:22.770033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.770054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.770086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.770103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:46.316 [2024-11-27 00:52:22.770117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.770136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.770206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.770227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:46.316 [2024-11-27 00:52:22.770248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.770290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.770320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.770336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:46.316 [2024-11-27 00:52:22.770352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.770366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.778103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.778211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:46.316 [2024-11-27 00:52:22.778249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.778266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.784135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.784237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:46.316 [2024-11-27 00:52:22.784276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.784293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.784335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.784352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:46.316 [2024-11-27 00:52:22.784366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.784381] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.784435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.784456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:46.316 [2024-11-27 00:52:22.784472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.784514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.784580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.784743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:46.316 [2024-11-27 00:52:22.784763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.784777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.784815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.784833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:46.316 [2024-11-27 00:52:22.784862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.784882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.784922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.784940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:46.316 [2024-11-27 00:52:22.784981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.784999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.785042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:46.316 [2024-11-27 00:52:22.785064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:46.316 [2024-11-27 00:52:22.785079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:46.316 [2024-11-27 00:52:22.785094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.316 [2024-11-27 00:52:22.785200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 35.012 ms, result 0 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:46.316 Remove shared memory files 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:46.316 00:52:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94289 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:46.316 ************************************ 00:29:46.316 END TEST ftl_upgrade_shutdown 00:29:46.316 ************************************ 00:29:46.316 00:29:46.316 real 1m13.667s 00:29:46.316 user 1m37.975s 00:29:46.316 sys 0m21.368s 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.316 00:52:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:46.316 00:52:22 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:46.316 00:52:22 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:46.316 00:52:22 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:46.316 00:52:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.316 00:52:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:46.316 ************************************ 00:29:46.316 START TEST ftl_restore_fast 00:29:46.316 ************************************ 00:29:46.316 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:46.316 * Looking for test storage... 00:29:46.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:46.316 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.316 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.316 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.578 --rc genhtml_branch_coverage=1 00:29:46.578 --rc genhtml_function_coverage=1 00:29:46.578 --rc genhtml_legend=1 00:29:46.578 --rc geninfo_all_blocks=1 00:29:46.578 --rc geninfo_unexecuted_blocks=1 00:29:46.578 00:29:46.578 ' 00:29:46.578 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.578 --rc genhtml_branch_coverage=1 00:29:46.578 --rc genhtml_function_coverage=1 00:29:46.578 --rc genhtml_legend=1 00:29:46.578 --rc geninfo_all_blocks=1 00:29:46.578 --rc geninfo_unexecuted_blocks=1 00:29:46.579 00:29:46.579 ' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.579 --rc genhtml_branch_coverage=1 00:29:46.579 --rc genhtml_function_coverage=1 00:29:46.579 --rc genhtml_legend=1 00:29:46.579 --rc geninfo_all_blocks=1 00:29:46.579 --rc geninfo_unexecuted_blocks=1 00:29:46.579 00:29:46.579 ' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.579 --rc genhtml_branch_coverage=1 00:29:46.579 --rc genhtml_function_coverage=1 00:29:46.579 --rc genhtml_legend=1 00:29:46.579 --rc geninfo_all_blocks=1 00:29:46.579 --rc geninfo_unexecuted_blocks=1 00:29:46.579 00:29:46.579 ' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
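Annotation: the xtrace above walks scripts/common.sh through a dotted-version comparison (lt 1.15 2 via cmp_versions) to decide whether the installed lcov predates 2.x and therefore needs the 1.x-style --rc lcov_branch_coverage=1 flags. A minimal Bash sketch of that logic, assuming this condensed form (the real helper also validates components via decimal and supports more operators than '<'):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Compare component by component; a missing component counts as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1  # equal versions are not '<'
    }
    lt 1.15 2 && echo 'lcov 1.x detected: enable --rc lcov_branch_coverage=1'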
00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.VNeYxoYwlT 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:46.579 00:52:23 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94722 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94722 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94722 ']' 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.579 00:52:23 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:46.579 [2024-11-27 00:52:23.277683] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:29:46.579 [2024-11-27 00:52:23.277816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94722 ] 00:29:46.840 [2024-11-27 00:52:23.428429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.840 [2024-11-27 00:52:23.444837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.412 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.412 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:47.412 00:52:24 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:47.412 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:47.412 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:47.413 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:47.413 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:47.413 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:29:47.673 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:47.933 { 00:29:47.933 "name": "nvme0n1", 00:29:47.933 "aliases": [ 00:29:47.933 "04b295e1-71cb-4193-ab6f-1c4d07aa6504" 00:29:47.933 ], 00:29:47.933 "product_name": "NVMe disk", 00:29:47.933 "block_size": 4096, 00:29:47.933 "num_blocks": 1310720, 00:29:47.933 "uuid": "04b295e1-71cb-4193-ab6f-1c4d07aa6504", 00:29:47.933 "numa_id": -1, 00:29:47.933 "assigned_rate_limits": { 00:29:47.933 "rw_ios_per_sec": 0, 00:29:47.933 "rw_mbytes_per_sec": 0, 00:29:47.933 "r_mbytes_per_sec": 0, 00:29:47.933 "w_mbytes_per_sec": 0 00:29:47.933 }, 00:29:47.933 "claimed": true, 00:29:47.933 "claim_type": "read_many_write_one", 00:29:47.933 "zoned": false, 00:29:47.933 "supported_io_types": { 00:29:47.933 "read": true, 00:29:47.933 "write": true, 00:29:47.933 "unmap": true, 00:29:47.933 "flush": true, 00:29:47.933 "reset": true, 00:29:47.933 "nvme_admin": true, 00:29:47.933 "nvme_io": true, 00:29:47.933 "nvme_io_md": false, 00:29:47.933 "write_zeroes": true, 00:29:47.933 "zcopy": false, 00:29:47.933 "get_zone_info": false, 00:29:47.933 "zone_management": false, 00:29:47.933 "zone_append": false, 00:29:47.933 "compare": true, 00:29:47.933 "compare_and_write": false, 00:29:47.933 "abort": true, 00:29:47.933 "seek_hole": false, 00:29:47.933 "seek_data": false, 00:29:47.933 "copy": true, 00:29:47.933 "nvme_iov_md": false 00:29:47.933 }, 00:29:47.933 "driver_specific": { 00:29:47.933 "nvme": [ 00:29:47.933 { 00:29:47.933 "pci_address": "0000:00:11.0", 00:29:47.933 "trid": { 00:29:47.933 "trtype": "PCIe", 00:29:47.933 "traddr": "0000:00:11.0" 00:29:47.933 }, 00:29:47.933 "ctrlr_data": { 00:29:47.933 "cntlid": 0, 00:29:47.933 "vendor_id": "0x1b36", 00:29:47.933 "model_number": "QEMU NVMe Ctrl", 00:29:47.933 "serial_number": "12341", 00:29:47.933 "firmware_revision": "8.0.0", 00:29:47.933 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:47.933 "oacs": { 00:29:47.933 "security": 0, 00:29:47.933 "format": 1, 00:29:47.933 "firmware": 0, 00:29:47.933 "ns_manage": 1 00:29:47.933 }, 00:29:47.933 "multi_ctrlr": false, 00:29:47.933 "ana_reporting": false 00:29:47.933 }, 00:29:47.933 "vs": { 00:29:47.933 "nvme_version": "1.4" 00:29:47.933 }, 00:29:47.933 "ns_data": { 00:29:47.933 "id": 1, 00:29:47.933 "can_share": false 00:29:47.933 } 00:29:47.933 } 00:29:47.933 ], 00:29:47.933 "mp_policy": "active_passive" 00:29:47.933 } 00:29:47.933 } 00:29:47.933 ]' 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:47.933 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:48.192 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=bdb786d7-a5a4-4119-8283-ad6a256512d1 00:29:48.192 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:48.192 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bdb786d7-a5a4-4119-8283-ad6a256512d1 00:29:48.454 00:52:24 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=25c61368-91ef-489f-bf8d-b54877d076fb 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 25c61368-91ef-489f-bf8d-b54877d076fb 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:48.715 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:48.977 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:48.977 { 00:29:48.977 "name": "9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d", 00:29:48.977 "aliases": [ 00:29:48.977 "lvs/nvme0n1p0" 00:29:48.977 ], 00:29:48.977 "product_name": "Logical Volume", 00:29:48.977 "block_size": 4096, 00:29:48.977 "num_blocks": 26476544, 00:29:48.977 "uuid": "9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d", 00:29:48.977 "assigned_rate_limits": { 00:29:48.977 "rw_ios_per_sec": 0, 00:29:48.977 "rw_mbytes_per_sec": 0, 00:29:48.977 "r_mbytes_per_sec": 0, 00:29:48.977 "w_mbytes_per_sec": 0 00:29:48.977 }, 00:29:48.977 "claimed": false, 00:29:48.977 "zoned": false, 00:29:48.977 "supported_io_types": { 00:29:48.977 "read": true, 00:29:48.978 "write": true, 00:29:48.978 "unmap": true, 00:29:48.978 "flush": false, 00:29:48.978 "reset": true, 00:29:48.978 "nvme_admin": false, 00:29:48.978 "nvme_io": false, 00:29:48.978 "nvme_io_md": false, 00:29:48.978 "write_zeroes": true, 00:29:48.978 "zcopy": false, 00:29:48.978 "get_zone_info": false, 00:29:48.978 "zone_management": false, 00:29:48.978 
"zone_append": false, 00:29:48.978 "compare": false, 00:29:48.978 "compare_and_write": false, 00:29:48.978 "abort": false, 00:29:48.978 "seek_hole": true, 00:29:48.978 "seek_data": true, 00:29:48.978 "copy": false, 00:29:48.978 "nvme_iov_md": false 00:29:48.978 }, 00:29:48.978 "driver_specific": { 00:29:48.978 "lvol": { 00:29:48.978 "lvol_store_uuid": "25c61368-91ef-489f-bf8d-b54877d076fb", 00:29:48.978 "base_bdev": "nvme0n1", 00:29:48.978 "thin_provision": true, 00:29:48.978 "num_allocated_clusters": 0, 00:29:48.978 "snapshot": false, 00:29:48.978 "clone": false, 00:29:48.978 "esnap_clone": false 00:29:48.978 } 00:29:48.978 } 00:29:48.978 } 00:29:48.978 ]' 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:48.978 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:49.239 00:52:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:49.501 { 00:29:49.501 "name": "9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d", 00:29:49.501 "aliases": [ 00:29:49.501 "lvs/nvme0n1p0" 00:29:49.501 ], 00:29:49.501 "product_name": "Logical Volume", 00:29:49.501 "block_size": 4096, 00:29:49.501 "num_blocks": 26476544, 00:29:49.501 "uuid": "9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d", 00:29:49.501 "assigned_rate_limits": { 00:29:49.501 "rw_ios_per_sec": 0, 00:29:49.501 "rw_mbytes_per_sec": 0, 00:29:49.501 "r_mbytes_per_sec": 0, 00:29:49.501 "w_mbytes_per_sec": 0 00:29:49.501 }, 00:29:49.501 "claimed": false, 00:29:49.501 "zoned": false, 00:29:49.501 "supported_io_types": { 00:29:49.501 "read": true, 00:29:49.501 "write": true, 00:29:49.501 "unmap": true, 00:29:49.501 "flush": false, 00:29:49.501 "reset": true, 00:29:49.501 "nvme_admin": false, 00:29:49.501 "nvme_io": false, 00:29:49.501 "nvme_io_md": false, 00:29:49.501 "write_zeroes": true, 00:29:49.501 "zcopy": false, 00:29:49.501 "get_zone_info": false, 00:29:49.501 
"zone_management": false, 00:29:49.501 "zone_append": false, 00:29:49.501 "compare": false, 00:29:49.501 "compare_and_write": false, 00:29:49.501 "abort": false, 00:29:49.501 "seek_hole": true, 00:29:49.501 "seek_data": true, 00:29:49.501 "copy": false, 00:29:49.501 "nvme_iov_md": false 00:29:49.501 }, 00:29:49.501 "driver_specific": { 00:29:49.501 "lvol": { 00:29:49.501 "lvol_store_uuid": "25c61368-91ef-489f-bf8d-b54877d076fb", 00:29:49.501 "base_bdev": "nvme0n1", 00:29:49.501 "thin_provision": true, 00:29:49.501 "num_allocated_clusters": 0, 00:29:49.501 "snapshot": false, 00:29:49.501 "clone": false, 00:29:49.501 "esnap_clone": false 00:29:49.501 } 00:29:49.501 } 00:29:49.501 } 00:29:49.501 ]' 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:49.501 00:52:26 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:49.763 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:49.763 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:49.763 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:49.763 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:49.763 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:49.763 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:49.763 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d 00:29:50.025 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:50.025 { 00:29:50.025 "name": "9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d", 00:29:50.025 "aliases": [ 00:29:50.025 "lvs/nvme0n1p0" 00:29:50.025 ], 00:29:50.025 "product_name": "Logical Volume", 00:29:50.025 "block_size": 4096, 00:29:50.025 "num_blocks": 26476544, 00:29:50.025 "uuid": "9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d", 00:29:50.025 "assigned_rate_limits": { 00:29:50.025 "rw_ios_per_sec": 0, 00:29:50.025 "rw_mbytes_per_sec": 0, 00:29:50.025 "r_mbytes_per_sec": 0, 00:29:50.025 "w_mbytes_per_sec": 0 00:29:50.025 }, 00:29:50.025 "claimed": false, 00:29:50.025 "zoned": false, 00:29:50.025 "supported_io_types": { 00:29:50.025 "read": true, 00:29:50.025 "write": true, 00:29:50.025 "unmap": true, 00:29:50.025 "flush": false, 00:29:50.025 "reset": true, 00:29:50.025 "nvme_admin": false, 00:29:50.025 "nvme_io": false, 00:29:50.025 "nvme_io_md": false, 00:29:50.025 "write_zeroes": true, 00:29:50.025 "zcopy": false, 00:29:50.025 "get_zone_info": false, 00:29:50.025 "zone_management": false, 00:29:50.025 "zone_append": false, 00:29:50.025 "compare": false, 00:29:50.025 "compare_and_write": false, 00:29:50.025 "abort": false, 
00:29:50.025 "seek_hole": true, 00:29:50.025 "seek_data": true, 00:29:50.025 "copy": false, 00:29:50.025 "nvme_iov_md": false 00:29:50.025 }, 00:29:50.025 "driver_specific": { 00:29:50.025 "lvol": { 00:29:50.025 "lvol_store_uuid": "25c61368-91ef-489f-bf8d-b54877d076fb", 00:29:50.025 "base_bdev": "nvme0n1", 00:29:50.025 "thin_provision": true, 00:29:50.025 "num_allocated_clusters": 0, 00:29:50.025 "snapshot": false, 00:29:50.025 "clone": false, 00:29:50.025 "esnap_clone": false 00:29:50.025 } 00:29:50.025 } 00:29:50.025 } 00:29:50.026 ]' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d --l2p_dram_limit 10' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:50.026 00:52:26 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9cb21bf6-fe11-4aff-94d4-c80ee8f9e54d --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:50.287 [2024-11-27 00:52:26.930536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.930574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:50.287 [2024-11-27 00:52:26.930584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:50.287 [2024-11-27 00:52:26.930592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.930636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.930647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:50.287 [2024-11-27 00:52:26.930653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:50.287 [2024-11-27 00:52:26.930661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.930675] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:50.287 [2024-11-27 00:52:26.930897] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:50.287 [2024-11-27 00:52:26.930910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.930917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:50.287 [2024-11-27 00:52:26.930923] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:29:50.287 [2024-11-27 00:52:26.930930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.930953] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d13e913d-e9f0-4de6-927a-adc927448444 00:29:50.287 [2024-11-27 00:52:26.931952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.931984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:50.287 [2024-11-27 00:52:26.931993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:50.287 [2024-11-27 00:52:26.931999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.936613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.936641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:50.287 [2024-11-27 00:52:26.936651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.545 ms 00:29:50.287 [2024-11-27 00:52:26.936657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.936718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.936724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:50.287 [2024-11-27 00:52:26.936735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:50.287 [2024-11-27 00:52:26.936741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.936778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.936785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:50.287 [2024-11-27 00:52:26.936793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:50.287 [2024-11-27 00:52:26.936799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.936819] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:50.287 [2024-11-27 00:52:26.938074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.938186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:50.287 [2024-11-27 00:52:26.938198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:29:50.287 [2024-11-27 00:52:26.938205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.938231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.287 [2024-11-27 00:52:26.938239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:50.287 [2024-11-27 00:52:26.938248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:50.287 [2024-11-27 00:52:26.938256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.287 [2024-11-27 00:52:26.938269] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:50.287 [2024-11-27 00:52:26.938384] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:50.287 [2024-11-27 00:52:26.938393] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:50.287 [2024-11-27 00:52:26.938403] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:50.287 [2024-11-27 00:52:26.938414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:50.287 [2024-11-27 00:52:26.938422] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938431] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:50.288 [2024-11-27 00:52:26.938438] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:50.288 [2024-11-27 00:52:26.938443] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:50.288 [2024-11-27 00:52:26.938450] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:50.288 [2024-11-27 00:52:26.938457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.288 [2024-11-27 00:52:26.938465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:50.288 [2024-11-27 00:52:26.938471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:29:50.288 [2024-11-27 00:52:26.938477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.288 [2024-11-27 00:52:26.938541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.288 [2024-11-27 00:52:26.938550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:50.288 [2024-11-27 00:52:26.938555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:50.288 [2024-11-27 00:52:26.938564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.288 [2024-11-27 00:52:26.938635] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:50.288 [2024-11-27 00:52:26.938644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:50.288 [2024-11-27 00:52:26.938650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:50.288 [2024-11-27 00:52:26.938673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:50.288 [2024-11-27 00:52:26.938692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.288 [2024-11-27 00:52:26.938703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:50.288 [2024-11-27 00:52:26.938710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:50.288 [2024-11-27 00:52:26.938716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:50.288 [2024-11-27 00:52:26.938724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:50.288 [2024-11-27 00:52:26.938729] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:50.288 [2024-11-27 00:52:26.938735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:50.288 [2024-11-27 00:52:26.938748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:50.288 [2024-11-27 00:52:26.938766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:50.288 [2024-11-27 00:52:26.938789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:50.288 [2024-11-27 00:52:26.938808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:50.288 [2024-11-27 00:52:26.938829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:50.288 [2024-11-27 00:52:26.938841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:50.288 [2024-11-27 00:52:26.938848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.288 [2024-11-27 00:52:26.938872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:50.288 [2024-11-27 00:52:26.938879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:50.288 [2024-11-27 00:52:26.938886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:50.288 [2024-11-27 00:52:26.938894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:50.288 [2024-11-27 00:52:26.938899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:50.288 [2024-11-27 00:52:26.938908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:50.288 [2024-11-27 00:52:26.938930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:50.288 [2024-11-27 00:52:26.938936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938942] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:50.288 [2024-11-27 00:52:26.938949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:50.288 [2024-11-27 00:52:26.938961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:50.288 [2024-11-27 00:52:26.938967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:50.288 [2024-11-27 00:52:26.938976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:50.288 [2024-11-27 00:52:26.938983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:50.288 [2024-11-27 00:52:26.938991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:50.288 [2024-11-27 00:52:26.938996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:50.288 [2024-11-27 00:52:26.939003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:50.288 [2024-11-27 00:52:26.939009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:50.288 [2024-11-27 00:52:26.939019] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:50.288 [2024-11-27 00:52:26.939027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.288 [2024-11-27 00:52:26.939038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:50.288 [2024-11-27 00:52:26.939044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:50.288 [2024-11-27 00:52:26.939053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:50.288 [2024-11-27 00:52:26.939060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:50.288 [2024-11-27 00:52:26.939067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:50.288 [2024-11-27 00:52:26.939073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:50.288 [2024-11-27 00:52:26.939083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:50.288 [2024-11-27 00:52:26.939089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:50.288 [2024-11-27 00:52:26.939096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:50.288 [2024-11-27 00:52:26.939102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:50.288 [2024-11-27 00:52:26.939110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:50.288 [2024-11-27 00:52:26.939117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:50.288 [2024-11-27 00:52:26.939124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:50.288 [2024-11-27 00:52:26.939130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
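Annotation: the SB metadata entries just dumped encode each region as hex block offsets and sizes. A quick cross-check against the MiB figures in the layout dump above, assuming 4096-byte FTL blocks (the base bdev's block_size):

    # Region type:0x2 (l2p): blk_offs:0x20, blk_sz:0x5000
    echo $(( 0x20 * 4096 ))                  # 131072 bytes = 0.12 MiB, matching 'Region l2p ... offset: 0.12 MiB'
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))  # 80, matching 'Region l2p ... blocks: 80.00 MiB'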
00:29:50.288 [2024-11-27 00:52:26.939137] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:50.288 [2024-11-27 00:52:26.939144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:50.288 [2024-11-27 00:52:26.939155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:50.288 [2024-11-27 00:52:26.939160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:50.288 [2024-11-27 00:52:26.939167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:50.288 [2024-11-27 00:52:26.939172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:50.288 [2024-11-27 00:52:26.939178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.288 [2024-11-27 00:52:26.939184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:50.288 [2024-11-27 00:52:26.939192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:29:50.288 [2024-11-27 00:52:26.939197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.288 [2024-11-27 00:52:26.939228] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:50.288 [2024-11-27 00:52:26.939235] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:54.581 [2024-11-27 00:52:30.851645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.581 [2024-11-27 00:52:30.852003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:54.581 [2024-11-27 00:52:30.852038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3912.395 ms 00:29:54.581 [2024-11-27 00:52:30.852049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.581 [2024-11-27 00:52:30.866322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.581 [2024-11-27 00:52:30.866382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.581 [2024-11-27 00:52:30.866401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.143 ms 00:29:54.581 [2024-11-27 00:52:30.866413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.581 [2024-11-27 00:52:30.866548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.581 [2024-11-27 00:52:30.866559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:54.581 [2024-11-27 00:52:30.866571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:29:54.581 [2024-11-27 00:52:30.866580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.581 [2024-11-27 00:52:30.879724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.581 [2024-11-27 00:52:30.879777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:54.582 [2024-11-27 00:52:30.879792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.098 ms 00:29:54.582 [2024-11-27 00:52:30.879804] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:30.879843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:30.879906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:54.582 [2024-11-27 00:52:30.879918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:54.582 [2024-11-27 00:52:30.879927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:30.880464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:30.880503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:54.582 [2024-11-27 00:52:30.880519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:29:54.582 [2024-11-27 00:52:30.880529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:30.880659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:30.880669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:54.582 [2024-11-27 00:52:30.880682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:29:54.582 [2024-11-27 00:52:30.880696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:30.889558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:30.889606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:54.582 [2024-11-27 00:52:30.889621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.834 ms 00:29:54.582 [2024-11-27 00:52:30.889649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:30.908312] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:54.582 [2024-11-27 00:52:30.912644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:30.912710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:54.582 [2024-11-27 00:52:30.912726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.917 ms 00:29:54.582 [2024-11-27 00:52:30.912740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.001608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.001701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:54.582 [2024-11-27 00:52:31.001715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.814 ms 00:29:54.582 [2024-11-27 00:52:31.001730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.001961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.001980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:54.582 [2024-11-27 00:52:31.001989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:29:54.582 [2024-11-27 00:52:31.002000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.007803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.008034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:29:54.582 [2024-11-27 00:52:31.008060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.763 ms 00:29:54.582 [2024-11-27 00:52:31.008071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.013556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.013613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:54.582 [2024-11-27 00:52:31.013625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.440 ms 00:29:54.582 [2024-11-27 00:52:31.013665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.014072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.014090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:54.582 [2024-11-27 00:52:31.014100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:29:54.582 [2024-11-27 00:52:31.014113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.062260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.062321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:54.582 [2024-11-27 00:52:31.062337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.085 ms 00:29:54.582 [2024-11-27 00:52:31.062348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.069897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.069949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:54.582 [2024-11-27 00:52:31.069961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.485 ms 00:29:54.582 [2024-11-27 00:52:31.069973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.075987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.076041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:54.582 [2024-11-27 00:52:31.076052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.963 ms 00:29:54.582 [2024-11-27 00:52:31.076062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.082467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.082526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:54.582 [2024-11-27 00:52:31.082538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.357 ms 00:29:54.582 [2024-11-27 00:52:31.082553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.082608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.082629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:54.582 [2024-11-27 00:52:31.082638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:54.582 [2024-11-27 00:52:31.082649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.082742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.582 [2024-11-27 00:52:31.082757] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:54.582 [2024-11-27 00:52:31.082770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:54.582 [2024-11-27 00:52:31.082783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.582 [2024-11-27 00:52:31.084093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4153.057 ms, result 0 00:29:54.582 { 00:29:54.582 "name": "ftl0", 00:29:54.582 "uuid": "d13e913d-e9f0-4de6-927a-adc927448444" 00:29:54.582 } 00:29:54.582 00:52:31 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:54.582 00:52:31 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:54.582 00:52:31 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:54.582 00:52:31 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:54.845 [2024-11-27 00:52:31.515246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.845 [2024-11-27 00:52:31.515460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:54.845 [2024-11-27 00:52:31.515489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:54.845 [2024-11-27 00:52:31.515498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.845 [2024-11-27 00:52:31.515535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:54.845 [2024-11-27 00:52:31.516299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.845 [2024-11-27 00:52:31.516357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:54.845 [2024-11-27 00:52:31.516370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:29:54.845 [2024-11-27 00:52:31.516383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.845 [2024-11-27 00:52:31.516676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.845 [2024-11-27 00:52:31.516692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:54.845 [2024-11-27 00:52:31.516709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:29:54.845 [2024-11-27 00:52:31.516723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.845 [2024-11-27 00:52:31.520013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.845 [2024-11-27 00:52:31.520045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:54.845 [2024-11-27 00:52:31.520056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:29:54.845 [2024-11-27 00:52:31.520067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.845 [2024-11-27 00:52:31.526325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.845 [2024-11-27 00:52:31.526513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:54.845 [2024-11-27 00:52:31.526534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.232 ms 00:29:54.845 [2024-11-27 00:52:31.526549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.845 [2024-11-27 00:52:31.529430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
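Annotation: ftl/restore.sh@61-65 above snapshot the bdev subsystem before unloading: the test wraps the output of rpc.py save_subsystem_config -n bdev in a hand-written {"subsystems": [...]} envelope so the FTL stack can be re-created later from JSON. A sketch of that composition; the redirect target is an assumption, since the trace does not show where the JSON is written:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > "$ftl_json"  # $ftl_json is a placeholder; the real output path is not visible in this trace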
00:29:54.845 [2024-11-27 00:52:31.529488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:54.845 [2024-11-27 00:52:31.529499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:29:54.845 [2024-11-27 00:52:31.529510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.845 [2024-11-27 00:52:31.536373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.845 [2024-11-27 00:52:31.536555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:54.845 [2024-11-27 00:52:31.536616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.812 ms 00:29:54.845 [2024-11-27 00:52:31.536643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.846 [2024-11-27 00:52:31.536804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.846 [2024-11-27 00:52:31.536841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:54.846 [2024-11-27 00:52:31.536970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:54.846 [2024-11-27 00:52:31.537000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.846 [2024-11-27 00:52:31.540713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.846 [2024-11-27 00:52:31.540769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:54.846 [2024-11-27 00:52:31.540780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.353 ms 00:29:54.846 [2024-11-27 00:52:31.540790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.846 [2024-11-27 00:52:31.543515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.846 [2024-11-27 00:52:31.543577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:54.846 [2024-11-27 00:52:31.543587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:29:54.846 [2024-11-27 00:52:31.543597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.846 [2024-11-27 00:52:31.545688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.846 [2024-11-27 00:52:31.545743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:54.846 [2024-11-27 00:52:31.545754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:29:54.846 [2024-11-27 00:52:31.545764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.846 [2024-11-27 00:52:31.548341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.846 [2024-11-27 00:52:31.548509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:54.846 [2024-11-27 00:52:31.548527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:29:54.846 [2024-11-27 00:52:31.548537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.846 [2024-11-27 00:52:31.548576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:54.846 [2024-11-27 00:52:31.548595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548616] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548849] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.548998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 
00:52:31.549112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:54.846 [2024-11-27 00:52:31.549291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:29:54.847 [2024-11-27 00:52:31.549347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:54.847 [2024-11-27 00:52:31.549568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:54.847 [2024-11-27 00:52:31.549577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d13e913d-e9f0-4de6-927a-adc927448444 00:29:54.847 
[2024-11-27 00:52:31.549588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:54.847 [2024-11-27 00:52:31.549595] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:54.847 [2024-11-27 00:52:31.549605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:54.847 [2024-11-27 00:52:31.549613] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:54.847 [2024-11-27 00:52:31.549642] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:54.847 [2024-11-27 00:52:31.549651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:54.847 [2024-11-27 00:52:31.549663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:54.847 [2024-11-27 00:52:31.549670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:54.847 [2024-11-27 00:52:31.549678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:54.847 [2024-11-27 00:52:31.549686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.847 [2024-11-27 00:52:31.549696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:54.847 [2024-11-27 00:52:31.549706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:29:54.847 [2024-11-27 00:52:31.549717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.552110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.847 [2024-11-27 00:52:31.552152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:54.847 [2024-11-27 00:52:31.552168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.370 ms 00:29:54.847 [2024-11-27 00:52:31.552179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.552300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.847 [2024-11-27 00:52:31.552313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:54.847 [2024-11-27 00:52:31.552322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:54.847 [2024-11-27 00:52:31.552331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.560559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.560616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:54.847 [2024-11-27 00:52:31.560627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.560638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.560702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.560713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:54.847 [2024-11-27 00:52:31.560722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.560734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.560817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.560834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:54.847 [2024-11-27 00:52:31.560843] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.560888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.560908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.560919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:54.847 [2024-11-27 00:52:31.560929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.560939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.575374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.575583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:54.847 [2024-11-27 00:52:31.575606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.575616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.587068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.847 [2024-11-27 00:52:31.587079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.587090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.587187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.847 [2024-11-27 00:52:31.587198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.587209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.587274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.847 [2024-11-27 00:52:31.587283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.587293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.587386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.847 [2024-11-27 00:52:31.587395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.587407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.587460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:54.847 [2024-11-27 00:52:31.587468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.587477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.587534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:54.847 [2024-11-27 00:52:31.587543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.587553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.847 [2024-11-27 00:52:31.587619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.847 [2024-11-27 00:52:31.587628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.847 [2024-11-27 00:52:31.587641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.847 [2024-11-27 00:52:31.587791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.508 ms, result 0 00:29:54.847 true 00:29:54.847 00:52:31 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94722 00:29:54.847 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94722 ']' 00:29:54.847 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94722 00:29:54.847 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:54.848 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.848 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94722 00:29:55.109 killing process with pid 94722 00:29:55.109 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.109 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.109 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94722' 00:29:55.109 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94722 00:29:55.109 00:52:31 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94722 00:29:59.322 00:52:36 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:03.529 262144+0 records in 00:30:03.529 262144+0 records out 00:30:03.529 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.49634 s, 307 MB/s 00:30:03.529 00:52:39 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:04.472 00:52:41 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:04.472 [2024-11-27 00:52:41.251019] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
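The dd figures above are internally consistent: 256K records of 4 KiB each is exactly 1 GiB, and 1073741824 bytes over the reported 3.49634 s works out to dd's 307 MB/s (decimal megabytes, as dd reports). A quick check:

awk 'BEGIN {
  bytes = 262144 * 4096                         # 262144 records x 4 KiB = 1073741824 B
  printf "%.0f MB/s\n", bytes / 3.49634 / 1e6   # prints 307 MB/s
}'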
00:30:04.472 [2024-11-27 00:52:41.251141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94932 ] 00:30:04.732 [2024-11-27 00:52:41.412598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.733 [2024-11-27 00:52:41.438596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.995 [2024-11-27 00:52:41.542056] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:04.995 [2024-11-27 00:52:41.542131] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:04.995 [2024-11-27 00:52:41.697422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.697479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:04.995 [2024-11-27 00:52:41.697492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:04.995 [2024-11-27 00:52:41.697502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.697555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.697566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:04.995 [2024-11-27 00:52:41.697577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:04.995 [2024-11-27 00:52:41.697589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.697621] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:04.995 [2024-11-27 00:52:41.697885] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:04.995 [2024-11-27 00:52:41.697907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.697920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:04.995 [2024-11-27 00:52:41.697931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:30:04.995 [2024-11-27 00:52:41.697938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.699194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:04.995 [2024-11-27 00:52:41.701704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.701743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:04.995 [2024-11-27 00:52:41.701759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.512 ms 00:30:04.995 [2024-11-27 00:52:41.701769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.701824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.701837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:04.995 [2024-11-27 00:52:41.701845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:04.995 [2024-11-27 00:52:41.701866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.707096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
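spdk_dd is dd's bdev-aware counterpart: --if reads a regular file while --ob writes into a named bdev from the replayed JSON config. A sketch of the round-trip check a restore test like this one presumably performs, reading the data back out of ftl0 and comparing checksums; the read-back flags, output path, and copy bounds are assumptions (only the write direction appears in the trace above):

SPDK=/home/vagrant/spdk_repo/spdk
# Write direction, as shown in the trace above (restore.sh@73):
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json"
# Hypothetical read-back: --ib reads from the bdev, --of writes a file;
# bounding the copy to the 1 GiB written (4 KiB blocks) is assumed.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of=/tmp/testfile.readback \
    --json="$SPDK/test/ftl/config/ftl.json" --bs=4096 --count=262144
md5sum "$SPDK/test/ftl/testfile" /tmp/testfile.readback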
00:30:04.995 [2024-11-27 00:52:41.707130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:04.995 [2024-11-27 00:52:41.707143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:30:04.995 [2024-11-27 00:52:41.707150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.707238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.707248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:04.995 [2024-11-27 00:52:41.707256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:04.995 [2024-11-27 00:52:41.707264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.707310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.707325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:04.995 [2024-11-27 00:52:41.707333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:04.995 [2024-11-27 00:52:41.707343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.707366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:04.995 [2024-11-27 00:52:41.708758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.708790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:04.995 [2024-11-27 00:52:41.708799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:30:04.995 [2024-11-27 00:52:41.708807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.708835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.708843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:04.995 [2024-11-27 00:52:41.708863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:04.995 [2024-11-27 00:52:41.708876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.708895] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:04.995 [2024-11-27 00:52:41.708917] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:04.995 [2024-11-27 00:52:41.708950] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:04.995 [2024-11-27 00:52:41.708970] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:04.995 [2024-11-27 00:52:41.709078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:04.995 [2024-11-27 00:52:41.709089] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:04.995 [2024-11-27 00:52:41.709101] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:04.995 [2024-11-27 00:52:41.709112] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:04.995 [2024-11-27 00:52:41.709120] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:04.995 [2024-11-27 00:52:41.709128] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:04.995 [2024-11-27 00:52:41.709139] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:04.995 [2024-11-27 00:52:41.709146] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:04.995 [2024-11-27 00:52:41.709153] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:04.995 [2024-11-27 00:52:41.709160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.709168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:04.995 [2024-11-27 00:52:41.709175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:30:04.995 [2024-11-27 00:52:41.709182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.709271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.995 [2024-11-27 00:52:41.709279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:04.995 [2024-11-27 00:52:41.709286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:30:04.995 [2024-11-27 00:52:41.709293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.995 [2024-11-27 00:52:41.709394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:04.995 [2024-11-27 00:52:41.709404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:04.995 [2024-11-27 00:52:41.709412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:04.995 [2024-11-27 00:52:41.709425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.995 [2024-11-27 00:52:41.709433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:04.995 [2024-11-27 00:52:41.709440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:04.995 [2024-11-27 00:52:41.709446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:04.995 [2024-11-27 00:52:41.709454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:04.996 [2024-11-27 00:52:41.709461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:04.996 [2024-11-27 00:52:41.709474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:04.996 [2024-11-27 00:52:41.709482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:04.996 [2024-11-27 00:52:41.709489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:04.996 [2024-11-27 00:52:41.709496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:04.996 [2024-11-27 00:52:41.709503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:04.996 [2024-11-27 00:52:41.709509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:04.996 [2024-11-27 00:52:41.709522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:04.996 [2024-11-27 00:52:41.709529] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:04.996 [2024-11-27 00:52:41.709542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.996 [2024-11-27 00:52:41.709555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:04.996 [2024-11-27 00:52:41.709562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.996 [2024-11-27 00:52:41.709575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:04.996 [2024-11-27 00:52:41.709583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.996 [2024-11-27 00:52:41.709603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:04.996 [2024-11-27 00:52:41.709618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:04.996 [2024-11-27 00:52:41.709632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:04.996 [2024-11-27 00:52:41.709639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:04.996 [2024-11-27 00:52:41.709652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:04.996 [2024-11-27 00:52:41.709659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:04.996 [2024-11-27 00:52:41.709666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:04.996 [2024-11-27 00:52:41.709672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:04.996 [2024-11-27 00:52:41.709679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:04.996 [2024-11-27 00:52:41.709686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:04.996 [2024-11-27 00:52:41.709699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:04.996 [2024-11-27 00:52:41.709706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709716] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:04.996 [2024-11-27 00:52:41.709726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:04.996 [2024-11-27 00:52:41.709734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:04.996 [2024-11-27 00:52:41.709741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:04.996 [2024-11-27 00:52:41.709749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:04.996 [2024-11-27 00:52:41.709755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:04.996 [2024-11-27 00:52:41.709762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:04.996 
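The layout numbers in this dump cross-check, assuming the 4 KiB FTL block size: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region, and each of the 100 bands in the earlier validity dump holds 261120 blocks, i.e. 1020 MiB, so the bands account for ~102000 MiB of the 102400 MiB data_btm region:

awk 'BEGIN {
  mib = 1048576
  printf "l2p region: %.2f MiB\n", 20971520 * 4 / mib         # 80.00 MiB
  printf "per band:   %.0f MiB\n",  261120 * 4096 / mib       # 1020 MiB
  printf "100 bands:  %.0f MiB of 102400 MiB\n", 100 * 261120 * 4096 / mib
}'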
[2024-11-27 00:52:41.709769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:04.996 [2024-11-27 00:52:41.709776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:04.996 [2024-11-27 00:52:41.709783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:04.996 [2024-11-27 00:52:41.709791] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:04.996 [2024-11-27 00:52:41.709800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.996 [2024-11-27 00:52:41.709808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:04.996 [2024-11-27 00:52:41.709815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:04.996 [2024-11-27 00:52:41.709822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:04.996 [2024-11-27 00:52:41.709830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:04.996 [2024-11-27 00:52:41.709839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:04.996 [2024-11-27 00:52:41.709847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:04.996 [2024-11-27 00:52:41.709865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:04.996 [2024-11-27 00:52:41.709873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:04.996 [2024-11-27 00:52:41.709880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:04.996 [2024-11-27 00:52:41.709888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:04.996 [2024-11-27 00:52:41.709895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:04.996 [2024-11-27 00:52:41.709902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:04.996 [2024-11-27 00:52:41.709909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:04.996 [2024-11-27 00:52:41.709917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:04.996 [2024-11-27 00:52:41.709924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:04.996 [2024-11-27 00:52:41.709933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.996 [2024-11-27 00:52:41.709941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:04.996 [2024-11-27 00:52:41.709949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:04.996 [2024-11-27 00:52:41.709957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:04.996 [2024-11-27 00:52:41.709964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:04.996 [2024-11-27 00:52:41.709975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.709986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:04.996 [2024-11-27 00:52:41.709994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:30:04.996 [2024-11-27 00:52:41.710004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.719355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.719390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:04.996 [2024-11-27 00:52:41.719399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.309 ms 00:30:04.996 [2024-11-27 00:52:41.719408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.719489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.719498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:04.996 [2024-11-27 00:52:41.719511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:04.996 [2024-11-27 00:52:41.719518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.746917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.747003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:04.996 [2024-11-27 00:52:41.747035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.345 ms 00:30:04.996 [2024-11-27 00:52:41.747072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.747168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.747194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:04.996 [2024-11-27 00:52:41.747217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:04.996 [2024-11-27 00:52:41.747237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.747850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.747945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:04.996 [2024-11-27 00:52:41.747969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:30:04.996 [2024-11-27 00:52:41.747990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.748297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.748340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:04.996 [2024-11-27 00:52:41.748362] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:30:04.996 [2024-11-27 00:52:41.748381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.754561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.754587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:04.996 [2024-11-27 00:52:41.754596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.136 ms 00:30:04.996 [2024-11-27 00:52:41.754603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.756943] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:04.996 [2024-11-27 00:52:41.756981] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:04.996 [2024-11-27 00:52:41.756996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.757004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:04.996 [2024-11-27 00:52:41.757012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:30:04.996 [2024-11-27 00:52:41.757019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.771402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.771442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:04.996 [2024-11-27 00:52:41.771457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.346 ms 00:30:04.996 [2024-11-27 00:52:41.771464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.773190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.773216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:04.996 [2024-11-27 00:52:41.773225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.688 ms 00:30:04.996 [2024-11-27 00:52:41.773232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.996 [2024-11-27 00:52:41.774791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.996 [2024-11-27 00:52:41.774822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:04.997 [2024-11-27 00:52:41.774830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:30:04.997 [2024-11-27 00:52:41.774837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.997 [2024-11-27 00:52:41.775179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.997 [2024-11-27 00:52:41.775196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:04.997 [2024-11-27 00:52:41.775204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:30:04.997 [2024-11-27 00:52:41.775212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.258 [2024-11-27 00:52:41.790464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.790510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:05.259 [2024-11-27 00:52:41.790521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.236 ms 00:30:05.259 [2024-11-27 00:52:41.790529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 [2024-11-27 00:52:41.797942] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:05.259 [2024-11-27 00:52:41.800474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.800509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:05.259 [2024-11-27 00:52:41.800523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.905 ms 00:30:05.259 [2024-11-27 00:52:41.800530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 [2024-11-27 00:52:41.800607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.800620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:05.259 [2024-11-27 00:52:41.800629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:05.259 [2024-11-27 00:52:41.800640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 [2024-11-27 00:52:41.800712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.800728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:05.259 [2024-11-27 00:52:41.800740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:05.259 [2024-11-27 00:52:41.800747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 [2024-11-27 00:52:41.800769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.800781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:05.259 [2024-11-27 00:52:41.800788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:05.259 [2024-11-27 00:52:41.800795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 [2024-11-27 00:52:41.800825] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:05.259 [2024-11-27 00:52:41.800835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.800846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:05.259 [2024-11-27 00:52:41.800866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:05.259 [2024-11-27 00:52:41.800876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 [2024-11-27 00:52:41.804206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.804235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:05.259 [2024-11-27 00:52:41.804243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:30:05.259 [2024-11-27 00:52:41.804251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 [2024-11-27 00:52:41.804317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.259 [2024-11-27 00:52:41.804326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:05.259 [2024-11-27 00:52:41.804334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:05.259 [2024-11-27 00:52:41.804341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.259 
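The payoff of this restore path shows in the durations: the initial 'FTL startup' earlier in this log, which created the device from scratch, took 4153.057 ms, while restoring the existing state from media completes in 107.394 ms (the finish record that follows), roughly a 39x difference:

awk 'BEGIN { printf "%.1fx faster\n", 4153.057 / 107.394 }'   # ~38.7x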
[2024-11-27 00:52:41.805219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.394 ms, result 0 00:30:06.203  [2024-11-27T00:52:43.934Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-27T00:52:44.878Z] Copying: 37/1024 [MB] (14 MBps) [2024-11-27T00:52:45.819Z] Copying: 61/1024 [MB] (24 MBps) [2024-11-27T00:52:47.206Z] Copying: 97/1024 [MB] (36 MBps) [2024-11-27T00:52:48.151Z] Copying: 132/1024 [MB] (35 MBps) [2024-11-27T00:52:49.092Z] Copying: 146/1024 [MB] (13 MBps) [2024-11-27T00:52:50.039Z] Copying: 160/1024 [MB] (14 MBps) [2024-11-27T00:52:50.982Z] Copying: 174/1024 [MB] (13 MBps) [2024-11-27T00:52:51.927Z] Copying: 196/1024 [MB] (22 MBps) [2024-11-27T00:52:52.871Z] Copying: 214/1024 [MB] (17 MBps) [2024-11-27T00:52:54.259Z] Copying: 230/1024 [MB] (15 MBps) [2024-11-27T00:52:54.832Z] Copying: 242/1024 [MB] (12 MBps) [2024-11-27T00:52:56.218Z] Copying: 258/1024 [MB] (16 MBps) [2024-11-27T00:52:57.163Z] Copying: 271/1024 [MB] (12 MBps) [2024-11-27T00:52:58.104Z] Copying: 285/1024 [MB] (13 MBps) [2024-11-27T00:52:59.047Z] Copying: 296/1024 [MB] (10 MBps) [2024-11-27T00:52:59.991Z] Copying: 316/1024 [MB] (20 MBps) [2024-11-27T00:53:00.934Z] Copying: 331/1024 [MB] (14 MBps) [2024-11-27T00:53:01.879Z] Copying: 345/1024 [MB] (13 MBps) [2024-11-27T00:53:02.843Z] Copying: 363/1024 [MB] (18 MBps) [2024-11-27T00:53:04.249Z] Copying: 378/1024 [MB] (14 MBps) [2024-11-27T00:53:04.822Z] Copying: 393/1024 [MB] (15 MBps) [2024-11-27T00:53:06.209Z] Copying: 408/1024 [MB] (15 MBps) [2024-11-27T00:53:07.155Z] Copying: 425/1024 [MB] (16 MBps) [2024-11-27T00:53:08.102Z] Copying: 439/1024 [MB] (14 MBps) [2024-11-27T00:53:09.048Z] Copying: 454/1024 [MB] (15 MBps) [2024-11-27T00:53:09.993Z] Copying: 467/1024 [MB] (13 MBps) [2024-11-27T00:53:10.937Z] Copying: 483/1024 [MB] (16 MBps) [2024-11-27T00:53:11.883Z] Copying: 500/1024 [MB] (16 MBps) [2024-11-27T00:53:12.828Z] Copying: 518/1024 [MB] (18 MBps) [2024-11-27T00:53:14.214Z] Copying: 533/1024 [MB] (15 MBps) [2024-11-27T00:53:15.157Z] Copying: 544/1024 [MB] (11 MBps) [2024-11-27T00:53:16.099Z] Copying: 554/1024 [MB] (10 MBps) [2024-11-27T00:53:17.043Z] Copying: 570/1024 [MB] (15 MBps) [2024-11-27T00:53:17.987Z] Copying: 583/1024 [MB] (13 MBps) [2024-11-27T00:53:18.932Z] Copying: 607/1024 [MB] (23 MBps) [2024-11-27T00:53:19.876Z] Copying: 629/1024 [MB] (21 MBps) [2024-11-27T00:53:20.822Z] Copying: 648/1024 [MB] (19 MBps) [2024-11-27T00:53:22.211Z] Copying: 663/1024 [MB] (15 MBps) [2024-11-27T00:53:23.156Z] Copying: 681/1024 [MB] (18 MBps) [2024-11-27T00:53:24.103Z] Copying: 702/1024 [MB] (20 MBps) [2024-11-27T00:53:25.049Z] Copying: 716/1024 [MB] (13 MBps) [2024-11-27T00:53:25.993Z] Copying: 736/1024 [MB] (19 MBps) [2024-11-27T00:53:26.938Z] Copying: 753/1024 [MB] (17 MBps) [2024-11-27T00:53:27.882Z] Copying: 774/1024 [MB] (20 MBps) [2024-11-27T00:53:28.824Z] Copying: 791/1024 [MB] (17 MBps) [2024-11-27T00:53:30.218Z] Copying: 821/1024 [MB] (29 MBps) [2024-11-27T00:53:31.161Z] Copying: 851/1024 [MB] (30 MBps) [2024-11-27T00:53:32.205Z] Copying: 868/1024 [MB] (17 MBps) [2024-11-27T00:53:33.149Z] Copying: 885/1024 [MB] (16 MBps) [2024-11-27T00:53:34.095Z] Copying: 901/1024 [MB] (16 MBps) [2024-11-27T00:53:35.039Z] Copying: 922/1024 [MB] (20 MBps) [2024-11-27T00:53:35.983Z] Copying: 958/1024 [MB] (35 MBps) [2024-11-27T00:53:36.929Z] Copying: 994/1024 [MB] (35 MBps) [2024-11-27T00:53:36.929Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-27 00:53:36.660535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:00.142 [2024-11-27 00:53:36.660575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:00.142 [2024-11-27 00:53:36.660586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:00.142 [2024-11-27 00:53:36.660598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.142 [2024-11-27 00:53:36.660614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:00.142 [2024-11-27 00:53:36.661029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-27 00:53:36.661045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:00.142 [2024-11-27 00:53:36.661053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:31:00.142 [2024-11-27 00:53:36.661059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.142 [2024-11-27 00:53:36.662526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-27 00:53:36.662554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:00.142 [2024-11-27 00:53:36.662562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:31:00.142 [2024-11-27 00:53:36.662568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.142 [2024-11-27 00:53:36.662590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-27 00:53:36.662597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:00.142 [2024-11-27 00:53:36.662607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:00.142 [2024-11-27 00:53:36.662612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.142 [2024-11-27 00:53:36.662649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.142 [2024-11-27 00:53:36.662660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:00.142 [2024-11-27 00:53:36.662666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:00.142 [2024-11-27 00:53:36.662672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.142 [2024-11-27 00:53:36.662682] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:00.142 [2024-11-27 00:53:36.662693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:00.142 [2024-11-27 00:53:36.662788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.662997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663048] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 
00:53:36.663191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:00.143 [2024-11-27 00:53:36.663207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:00.144 [2024-11-27 00:53:36.663305] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:00.144 [2024-11-27 00:53:36.663311] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d13e913d-e9f0-4de6-927a-adc927448444 00:31:00.144 [2024-11-27 00:53:36.663317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:00.144 [2024-11-27 00:53:36.663323] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:00.144 [2024-11-27 00:53:36.663329] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:00.144 [2024-11-27 00:53:36.663334] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:00.144 [2024-11-27 00:53:36.663343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:00.144 [2024-11-27 00:53:36.663349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:00.144 [2024-11-27 00:53:36.663354] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:00.144 [2024-11-27 00:53:36.663359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:00.144 [2024-11-27 00:53:36.663364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:00.144 [2024-11-27 00:53:36.663369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.144 [2024-11-27 00:53:36.663375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:00.144 [2024-11-27 00:53:36.663382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:31:00.144 [2024-11-27 00:53:36.663387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.664562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.144 [2024-11-27 00:53:36.664583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:00.144 [2024-11-27 00:53:36.664590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:31:00.144 [2024-11-27 00:53:36.664600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.664666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.144 [2024-11-27 00:53:36.664682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:00.144 [2024-11-27 00:53:36.664689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:00.144 [2024-11-27 00:53:36.664694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.668775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.668798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:00.144 [2024-11-27 00:53:36.668805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.668811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.668861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.668871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:00.144 [2024-11-27 00:53:36.668877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.668883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.668904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.668916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:00.144 [2024-11-27 00:53:36.668922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.668928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.668939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.668948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:00.144 [2024-11-27 00:53:36.668956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.668966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.676508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:31:00.144 [2024-11-27 00:53:36.676540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:00.144 [2024-11-27 00:53:36.676547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.676553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.682493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.682531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:00.144 [2024-11-27 00:53:36.682538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.682545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.682581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.682588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:00.144 [2024-11-27 00:53:36.682594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.682600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.682618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.682624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:00.144 [2024-11-27 00:53:36.682630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.682638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.682682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.682689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:00.144 [2024-11-27 00:53:36.682697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.682703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.682720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.682727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:00.144 [2024-11-27 00:53:36.682733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.144 [2024-11-27 00:53:36.682739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.144 [2024-11-27 00:53:36.682767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.144 [2024-11-27 00:53:36.682774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:00.145 [2024-11-27 00:53:36.682780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.145 [2024-11-27 00:53:36.682785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.145 [2024-11-27 00:53:36.682815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:00.145 [2024-11-27 00:53:36.682822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:00.145 [2024-11-27 00:53:36.682827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:00.145 [2024-11-27 00:53:36.682835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.145 [2024-11-27 00:53:36.682968] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 22.406 ms, result 0 00:31:00.145 00:31:00.145 00:31:00.406 00:53:36 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:00.406 [2024-11-27 00:53:36.997020] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:31:00.406 [2024-11-27 00:53:36.997138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95489 ] 00:31:00.406 [2024-11-27 00:53:37.151371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.406 [2024-11-27 00:53:37.172546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.669 [2024-11-27 00:53:37.257282] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:00.669 [2024-11-27 00:53:37.257337] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:00.669 [2024-11-27 00:53:37.413046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.669 [2024-11-27 00:53:37.413097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:00.669 [2024-11-27 00:53:37.413110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:00.669 [2024-11-27 00:53:37.413118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.669 [2024-11-27 00:53:37.413167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.669 [2024-11-27 00:53:37.413178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:00.669 [2024-11-27 00:53:37.413186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:00.669 [2024-11-27 00:53:37.413193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.669 [2024-11-27 00:53:37.413214] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:00.669 [2024-11-27 00:53:37.413751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:00.669 [2024-11-27 00:53:37.413793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.669 [2024-11-27 00:53:37.413804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:00.669 [2024-11-27 00:53:37.413816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:31:00.669 [2024-11-27 00:53:37.413828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.669 [2024-11-27 00:53:37.414194] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:00.669 [2024-11-27 00:53:37.414238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.669 [2024-11-27 00:53:37.414247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:00.669 [2024-11-27 00:53:37.414256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:00.669 [2024-11-27 00:53:37.414266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.669 
[2024-11-27 00:53:37.414312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.669 [2024-11-27 00:53:37.414321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:00.669 [2024-11-27 00:53:37.414329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:00.669 [2024-11-27 00:53:37.414339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.669 [2024-11-27 00:53:37.414569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.669 [2024-11-27 00:53:37.414586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:00.669 [2024-11-27 00:53:37.414594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:31:00.669 [2024-11-27 00:53:37.414604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.669 [2024-11-27 00:53:37.414678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.669 [2024-11-27 00:53:37.414690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:00.669 [2024-11-27 00:53:37.414698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:31:00.670 [2024-11-27 00:53:37.414704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.670 [2024-11-27 00:53:37.414725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.670 [2024-11-27 00:53:37.414733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:00.670 [2024-11-27 00:53:37.414741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:00.670 [2024-11-27 00:53:37.414747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.670 [2024-11-27 00:53:37.414769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:00.670 [2024-11-27 00:53:37.416248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.670 [2024-11-27 00:53:37.416275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:00.670 [2024-11-27 00:53:37.416283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:31:00.670 [2024-11-27 00:53:37.416294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.670 [2024-11-27 00:53:37.416325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.670 [2024-11-27 00:53:37.416333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:00.670 [2024-11-27 00:53:37.416340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:00.670 [2024-11-27 00:53:37.416347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.670 [2024-11-27 00:53:37.416368] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:00.670 [2024-11-27 00:53:37.416388] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:00.670 [2024-11-27 00:53:37.416420] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:00.670 [2024-11-27 00:53:37.416438] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:00.670 [2024-11-27 00:53:37.416539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:31:00.670 [2024-11-27 00:53:37.416554] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:00.670 [2024-11-27 00:53:37.416564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:00.670 [2024-11-27 00:53:37.416573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:00.670 [2024-11-27 00:53:37.416586] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:00.670 [2024-11-27 00:53:37.416597] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:00.670 [2024-11-27 00:53:37.416604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:00.670 [2024-11-27 00:53:37.416611] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:00.670 [2024-11-27 00:53:37.416618] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:00.670 [2024-11-27 00:53:37.416625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.670 [2024-11-27 00:53:37.416632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:00.670 [2024-11-27 00:53:37.416639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:31:00.670 [2024-11-27 00:53:37.416646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.670 [2024-11-27 00:53:37.416735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.670 [2024-11-27 00:53:37.416746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:00.670 [2024-11-27 00:53:37.416754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:00.670 [2024-11-27 00:53:37.416760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.670 [2024-11-27 00:53:37.416876] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:00.670 [2024-11-27 00:53:37.416887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:00.670 [2024-11-27 00:53:37.416904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:00.670 [2024-11-27 00:53:37.416917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.670 [2024-11-27 00:53:37.416926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:00.670 [2024-11-27 00:53:37.416938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:00.670 [2024-11-27 00:53:37.416946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:00.670 [2024-11-27 00:53:37.416954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:00.670 [2024-11-27 00:53:37.416962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:00.670 [2024-11-27 00:53:37.416969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:00.670 [2024-11-27 00:53:37.416980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:00.670 [2024-11-27 00:53:37.416988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:00.670 [2024-11-27 00:53:37.416995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:00.670 [2024-11-27 00:53:37.417003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:31:00.670 [2024-11-27 00:53:37.417010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:00.670 [2024-11-27 00:53:37.417018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:00.670 [2024-11-27 00:53:37.417033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:00.670 [2024-11-27 00:53:37.417043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:00.670 [2024-11-27 00:53:37.417058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.670 [2024-11-27 00:53:37.417073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:00.670 [2024-11-27 00:53:37.417081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.670 [2024-11-27 00:53:37.417096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:00.670 [2024-11-27 00:53:37.417103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.670 [2024-11-27 00:53:37.417118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:00.670 [2024-11-27 00:53:37.417125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:00.670 [2024-11-27 00:53:37.417140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:00.670 [2024-11-27 00:53:37.417148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:00.670 [2024-11-27 00:53:37.417166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:00.670 [2024-11-27 00:53:37.417173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:00.670 [2024-11-27 00:53:37.417180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:00.670 [2024-11-27 00:53:37.417187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:00.670 [2024-11-27 00:53:37.417195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:00.670 [2024-11-27 00:53:37.417202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:00.670 [2024-11-27 00:53:37.417217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:00.670 [2024-11-27 00:53:37.417225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.670 [2024-11-27 00:53:37.417233] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:00.671 [2024-11-27 00:53:37.417242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:00.671 [2024-11-27 00:53:37.417250] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:00.671 [2024-11-27 00:53:37.417260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:00.671 [2024-11-27 00:53:37.417270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:00.671 [2024-11-27 00:53:37.417277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:00.671 [2024-11-27 00:53:37.417285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:00.671 [2024-11-27 00:53:37.417294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:00.671 [2024-11-27 00:53:37.417302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:00.671 [2024-11-27 00:53:37.417309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:00.671 [2024-11-27 00:53:37.417318] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:00.671 [2024-11-27 00:53:37.417332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:00.671 [2024-11-27 00:53:37.417342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:00.671 [2024-11-27 00:53:37.417350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:00.671 [2024-11-27 00:53:37.417358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:00.671 [2024-11-27 00:53:37.417366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:00.671 [2024-11-27 00:53:37.417374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:00.671 [2024-11-27 00:53:37.417381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:00.671 [2024-11-27 00:53:37.417388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:00.671 [2024-11-27 00:53:37.417395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:00.671 [2024-11-27 00:53:37.417402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:00.671 [2024-11-27 00:53:37.417409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:00.671 [2024-11-27 00:53:37.417416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:00.671 [2024-11-27 00:53:37.417425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:00.671 [2024-11-27 00:53:37.417432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:00.671 [2024-11-27 00:53:37.417439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:00.671 [2024-11-27 00:53:37.417446] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:00.671 [2024-11-27 00:53:37.417453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:00.671 [2024-11-27 00:53:37.417461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:00.671 [2024-11-27 00:53:37.417468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:00.671 [2024-11-27 00:53:37.417475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:00.671 [2024-11-27 00:53:37.417483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:00.671 [2024-11-27 00:53:37.417490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.417497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:00.671 [2024-11-27 00:53:37.417505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:31:00.671 [2024-11-27 00:53:37.417534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.423934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.423957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:00.671 [2024-11-27 00:53:37.423966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.334 ms 00:31:00.671 [2024-11-27 00:53:37.423974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.424048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.424056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:00.671 [2024-11-27 00:53:37.424064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:00.671 [2024-11-27 00:53:37.424071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.442750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.442811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:00.671 [2024-11-27 00:53:37.442829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.636 ms 00:31:00.671 [2024-11-27 00:53:37.442841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.442912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.442929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:00.671 [2024-11-27 00:53:37.442942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:00.671 [2024-11-27 00:53:37.442953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.443103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.443119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize trim map 00:31:00.671 [2024-11-27 00:53:37.443132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:00.671 [2024-11-27 00:53:37.443143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.443311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.443340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:00.671 [2024-11-27 00:53:37.443356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:31:00.671 [2024-11-27 00:53:37.443368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.450032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.450080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:00.671 [2024-11-27 00:53:37.450101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:31:00.671 [2024-11-27 00:53:37.450118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.671 [2024-11-27 00:53:37.450266] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:00.671 [2024-11-27 00:53:37.450289] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:00.671 [2024-11-27 00:53:37.450303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.671 [2024-11-27 00:53:37.450314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:00.671 [2024-11-27 00:53:37.450331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:00.671 [2024-11-27 00:53:37.450341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.933 [2024-11-27 00:53:37.464217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.933 [2024-11-27 00:53:37.464254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:00.933 [2024-11-27 00:53:37.464264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.855 ms 00:31:00.933 [2024-11-27 00:53:37.464274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.933 [2024-11-27 00:53:37.464384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.933 [2024-11-27 00:53:37.464393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:00.933 [2024-11-27 00:53:37.464408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:31:00.933 [2024-11-27 00:53:37.464421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.933 [2024-11-27 00:53:37.464467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.933 [2024-11-27 00:53:37.464477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:00.933 [2024-11-27 00:53:37.464484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:00.933 [2024-11-27 00:53:37.464491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.933 [2024-11-27 00:53:37.464782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.933 [2024-11-27 00:53:37.464799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:00.933 [2024-11-27 00:53:37.464807] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:31:00.933 [2024-11-27 00:53:37.464816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.933 [2024-11-27 00:53:37.464831] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:00.933 [2024-11-27 00:53:37.464842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.933 [2024-11-27 00:53:37.464850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:00.933 [2024-11-27 00:53:37.464872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:00.933 [2024-11-27 00:53:37.464880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.933 [2024-11-27 00:53:37.473020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:00.933 [2024-11-27 00:53:37.473141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.934 [2024-11-27 00:53:37.473150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:00.934 [2024-11-27 00:53:37.473158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.244 ms 00:31:00.934 [2024-11-27 00:53:37.473173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.934 [2024-11-27 00:53:37.475377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.934 [2024-11-27 00:53:37.475408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:00.934 [2024-11-27 00:53:37.475422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:31:00.934 [2024-11-27 00:53:37.475428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.934 [2024-11-27 00:53:37.475492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.934 [2024-11-27 00:53:37.475501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:00.934 [2024-11-27 00:53:37.475511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:00.934 [2024-11-27 00:53:37.475518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.934 [2024-11-27 00:53:37.475553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.934 [2024-11-27 00:53:37.475563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:00.934 [2024-11-27 00:53:37.475570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:00.934 [2024-11-27 00:53:37.475577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.934 [2024-11-27 00:53:37.475605] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:00.934 [2024-11-27 00:53:37.475616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.934 [2024-11-27 00:53:37.475623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:00.934 [2024-11-27 00:53:37.475631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:00.934 [2024-11-27 00:53:37.475639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.934 [2024-11-27 00:53:37.480197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.934 [2024-11-27 00:53:37.480236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:00.934 [2024-11-27 
00:53:37.480245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.539 ms 00:31:00.934 [2024-11-27 00:53:37.480252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.934 [2024-11-27 00:53:37.480320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:00.934 [2024-11-27 00:53:37.480330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:00.934 [2024-11-27 00:53:37.480338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:00.934 [2024-11-27 00:53:37.480344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:00.934 [2024-11-27 00:53:37.481213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 67.773 ms, result 0 00:31:01.879  [2024-11-27T00:53:40.054Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-27T00:53:40.998Z] Copying: 34/1024 [MB] (21 MBps) [2024-11-27T00:53:41.943Z] Copying: 57/1024 [MB] (23 MBps) [2024-11-27T00:53:42.888Z] Copying: 74/1024 [MB] (16 MBps) [2024-11-27T00:53:43.832Z] Copying: 89/1024 [MB] (14 MBps) [2024-11-27T00:53:44.777Z] Copying: 113/1024 [MB] (24 MBps) [2024-11-27T00:53:45.721Z] Copying: 135/1024 [MB] (21 MBps) [2024-11-27T00:53:46.664Z] Copying: 155/1024 [MB] (20 MBps) [2024-11-27T00:53:48.052Z] Copying: 171/1024 [MB] (16 MBps) [2024-11-27T00:53:48.996Z] Copying: 181/1024 [MB] (10 MBps) [2024-11-27T00:53:49.939Z] Copying: 192/1024 [MB] (10 MBps) [2024-11-27T00:53:50.883Z] Copying: 209/1024 [MB] (17 MBps) [2024-11-27T00:53:51.826Z] Copying: 228/1024 [MB] (18 MBps) [2024-11-27T00:53:52.768Z] Copying: 245/1024 [MB] (17 MBps) [2024-11-27T00:53:53.706Z] Copying: 270/1024 [MB] (24 MBps) [2024-11-27T00:53:55.092Z] Copying: 286/1024 [MB] (16 MBps) [2024-11-27T00:53:55.664Z] Copying: 301/1024 [MB] (14 MBps) [2024-11-27T00:53:57.049Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-27T00:53:57.994Z] Copying: 324/1024 [MB] (12 MBps) [2024-11-27T00:53:58.940Z] Copying: 343/1024 [MB] (18 MBps) [2024-11-27T00:53:59.884Z] Copying: 364/1024 [MB] (20 MBps) [2024-11-27T00:54:00.891Z] Copying: 384/1024 [MB] (20 MBps) [2024-11-27T00:54:01.835Z] Copying: 400/1024 [MB] (16 MBps) [2024-11-27T00:54:02.778Z] Copying: 420/1024 [MB] (19 MBps) [2024-11-27T00:54:03.723Z] Copying: 433/1024 [MB] (12 MBps) [2024-11-27T00:54:04.670Z] Copying: 453/1024 [MB] (19 MBps) [2024-11-27T00:54:06.055Z] Copying: 467/1024 [MB] (14 MBps) [2024-11-27T00:54:06.999Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-27T00:54:07.943Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-27T00:54:08.888Z] Copying: 501/1024 [MB] (12 MBps) [2024-11-27T00:54:09.831Z] Copying: 519/1024 [MB] (18 MBps) [2024-11-27T00:54:10.777Z] Copying: 533/1024 [MB] (14 MBps) [2024-11-27T00:54:11.721Z] Copying: 554/1024 [MB] (21 MBps) [2024-11-27T00:54:12.667Z] Copying: 575/1024 [MB] (20 MBps) [2024-11-27T00:54:14.056Z] Copying: 598/1024 [MB] (23 MBps) [2024-11-27T00:54:15.000Z] Copying: 620/1024 [MB] (21 MBps) [2024-11-27T00:54:15.945Z] Copying: 641/1024 [MB] (21 MBps) [2024-11-27T00:54:16.891Z] Copying: 668/1024 [MB] (26 MBps) [2024-11-27T00:54:17.835Z] Copying: 678/1024 [MB] (10 MBps) [2024-11-27T00:54:18.780Z] Copying: 689/1024 [MB] (10 MBps) [2024-11-27T00:54:19.720Z] Copying: 704/1024 [MB] (15 MBps) [2024-11-27T00:54:20.661Z] Copying: 731/1024 [MB] (26 MBps) [2024-11-27T00:54:22.046Z] Copying: 749/1024 [MB] (17 MBps) [2024-11-27T00:54:22.990Z] Copying: 767/1024 [MB] (18 MBps) [2024-11-27T00:54:23.929Z] Copying: 788/1024 [MB] (21 MBps) 
[2024-11-27T00:54:24.869Z] Copying: 806/1024 [MB] (17 MBps) [2024-11-27T00:54:25.810Z] Copying: 828/1024 [MB] (21 MBps) [2024-11-27T00:54:26.751Z] Copying: 844/1024 [MB] (16 MBps) [2024-11-27T00:54:27.691Z] Copying: 857/1024 [MB] (12 MBps) [2024-11-27T00:54:29.078Z] Copying: 876/1024 [MB] (18 MBps) [2024-11-27T00:54:29.742Z] Copying: 890/1024 [MB] (13 MBps) [2024-11-27T00:54:30.681Z] Copying: 906/1024 [MB] (16 MBps) [2024-11-27T00:54:32.065Z] Copying: 926/1024 [MB] (20 MBps) [2024-11-27T00:54:33.009Z] Copying: 946/1024 [MB] (19 MBps) [2024-11-27T00:54:33.946Z] Copying: 965/1024 [MB] (19 MBps) [2024-11-27T00:54:34.889Z] Copying: 981/1024 [MB] (15 MBps) [2024-11-27T00:54:35.832Z] Copying: 1006/1024 [MB] (24 MBps) [2024-11-27T00:54:35.832Z] Copying: 1020/1024 [MB] (13 MBps) [2024-11-27T00:54:36.401Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-27 00:54:36.110466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.614 [2024-11-27 00:54:36.110532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:59.614 [2024-11-27 00:54:36.110546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:59.614 [2024-11-27 00:54:36.110556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.614 [2024-11-27 00:54:36.110581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:59.614 [2024-11-27 00:54:36.111180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.614 [2024-11-27 00:54:36.111205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:59.614 [2024-11-27 00:54:36.111213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:31:59.614 [2024-11-27 00:54:36.111221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.614 [2024-11-27 00:54:36.111412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.615 [2024-11-27 00:54:36.111421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:59.615 [2024-11-27 00:54:36.111429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:31:59.615 [2024-11-27 00:54:36.111436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.615 [2024-11-27 00:54:36.111471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.615 [2024-11-27 00:54:36.111480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:59.615 [2024-11-27 00:54:36.111488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:59.615 [2024-11-27 00:54:36.111494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.615 [2024-11-27 00:54:36.111545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.615 [2024-11-27 00:54:36.111552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:59.615 [2024-11-27 00:54:36.111560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:59.615 [2024-11-27 00:54:36.111567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.615 [2024-11-27 00:54:36.111579] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:59.615 [2024-11-27 00:54:36.111593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:59.615 [2024-11-27 00:54:36.111601] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[... ftl_dev_dump_bands: Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (identical entries elided) ...]
00:31:59.616 [2024-11-27 00:54:36.112249] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] 00:31:59.616 [2024-11-27 00:54:36.112256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d13e913d-e9f0-4de6-927a-adc927448444 00:31:59.616 [2024-11-27 00:54:36.112262] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:59.616 [2024-11-27 00:54:36.112267] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:59.616 [2024-11-27 00:54:36.112277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:59.616 [2024-11-27 00:54:36.112283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:59.616 [2024-11-27 00:54:36.112288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:59.616 [2024-11-27 00:54:36.112295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:59.616 [2024-11-27 00:54:36.112303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:59.616 [2024-11-27 00:54:36.112309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:59.616 [2024-11-27 00:54:36.112314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:59.616 [2024-11-27 00:54:36.112320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.616 [2024-11-27 00:54:36.112326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:59.616 [2024-11-27 00:54:36.112765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:31:59.616 [2024-11-27 00:54:36.112775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.114659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.616 [2024-11-27 00:54:36.114688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:59.616 [2024-11-27 00:54:36.114697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:31:59.616 [2024-11-27 00:54:36.114705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.114804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.616 [2024-11-27 00:54:36.114815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:59.616 [2024-11-27 00:54:36.114823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:31:59.616 [2024-11-27 00:54:36.114829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.121944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.121977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.616 [2024-11-27 00:54:36.121987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.121995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.122055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.122068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.616 [2024-11-27 00:54:36.122076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.122084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.122141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 
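The statistics dump above reports total writes: 32, user writes: 0 and WAF: inf. WAF is the write amplification factor, conventionally total device writes divided by user writes, so with zero user writes the ratio is reported as infinity. A minimal sketch of that arithmetic, assuming the conventional definition (the helper below is illustrative, not SPDK's ftl_debug.c code):

#include <math.h>
#include <stdio.h>

/* Write amplification factor as it appears in the dump above: total
 * device writes over user writes. user_writes == 0 yields "inf". */
static double waf(double total_writes, double user_writes)
{
    if (user_writes == 0.0)
        return INFINITY; /* matches the "WAF: inf" line */
    return total_writes / user_writes;
}

int main(void)
{
    printf("WAF: %g\n", waf(32.0, 0.0)); /* -> WAF: inf */
    return 0;
}

Run against the values in this dump it prints WAF: inf; once user writes are non-zero (as after the spdk_dd copy below) the same ratio becomes finite.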
[2024-11-27 00:54:36.122152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.616 [2024-11-27 00:54:36.122161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.122173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.122190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.122198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.616 [2024-11-27 00:54:36.122210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.122217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.135225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.135260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.616 [2024-11-27 00:54:36.135275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.135282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.144358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.616 [2024-11-27 00:54:36.144366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.144372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.144426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.616 [2024-11-27 00:54:36.144432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.144439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.144466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.616 [2024-11-27 00:54:36.144472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.144486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.144536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.616 [2024-11-27 00:54:36.144543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.144549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.144575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:59.616 [2024-11-27 00:54:36.144581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.144587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144624] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.144631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:59.616 [2024-11-27 00:54:36.144641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.144647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.616 [2024-11-27 00:54:36.144693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.616 [2024-11-27 00:54:36.144699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.616 [2024-11-27 00:54:36.144711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.616 [2024-11-27 00:54:36.144822] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 34.329 ms, result 0 00:31:59.616 00:31:59.616 00:31:59.616 00:54:36 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:02.159 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:02.159 00:54:38 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:02.159 [2024-11-27 00:54:38.544720] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 00:32:02.159 [2024-11-27 00:54:38.544841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96109 ] 00:32:02.159 [2024-11-27 00:54:38.698177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.159 [2024-11-27 00:54:38.721387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.159 [2024-11-27 00:54:38.821230] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:02.159 [2024-11-27 00:54:38.821287] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:02.421 [2024-11-27 00:54:38.968064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.968099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:02.421 [2024-11-27 00:54:38.968110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:02.421 [2024-11-27 00:54:38.968117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.968155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.968165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:02.421 [2024-11-27 00:54:38.968174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:02.421 [2024-11-27 00:54:38.968186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.968201] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:02.421 [2024-11-27 00:54:38.968403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using bdev as NV Cache device 00:32:02.421 [2024-11-27 00:54:38.968420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.968429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:02.421 [2024-11-27 00:54:38.968437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:32:02.421 [2024-11-27 00:54:38.968446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.968753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:02.421 [2024-11-27 00:54:38.968782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.968789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:02.421 [2024-11-27 00:54:38.968797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:02.421 [2024-11-27 00:54:38.968805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.968847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.968870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:02.421 [2024-11-27 00:54:38.968877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:02.421 [2024-11-27 00:54:38.968883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.969068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.969082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:02.421 [2024-11-27 00:54:38.969089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:32:02.421 [2024-11-27 00:54:38.969097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.969155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.969163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:02.421 [2024-11-27 00:54:38.969169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:32:02.421 [2024-11-27 00:54:38.969176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.969191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.969197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:02.421 [2024-11-27 00:54:38.969203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:02.421 [2024-11-27 00:54:38.969213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.969229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:02.421 [2024-11-27 00:54:38.970842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.970877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:02.421 [2024-11-27 00:54:38.970885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:32:02.421 [2024-11-27 00:54:38.970894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.970918] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.970926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:02.421 [2024-11-27 00:54:38.970932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:02.421 [2024-11-27 00:54:38.970939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.970952] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:02.421 [2024-11-27 00:54:38.970969] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:02.421 [2024-11-27 00:54:38.970999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:02.421 [2024-11-27 00:54:38.971017] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:02.421 [2024-11-27 00:54:38.971098] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:02.421 [2024-11-27 00:54:38.971112] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:02.421 [2024-11-27 00:54:38.971121] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:02.421 [2024-11-27 00:54:38.971130] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:02.421 [2024-11-27 00:54:38.971140] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:02.421 [2024-11-27 00:54:38.971148] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:02.421 [2024-11-27 00:54:38.971155] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:02.421 [2024-11-27 00:54:38.971161] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:02.421 [2024-11-27 00:54:38.971167] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:02.421 [2024-11-27 00:54:38.971173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.971178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:02.421 [2024-11-27 00:54:38.971185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:32:02.421 [2024-11-27 00:54:38.971193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.971256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.421 [2024-11-27 00:54:38.971273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:02.421 [2024-11-27 00:54:38.971280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:02.421 [2024-11-27 00:54:38.971286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.421 [2024-11-27 00:54:38.971360] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:02.421 [2024-11-27 00:54:38.971369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:02.421 [2024-11-27 00:54:38.971375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:02.421 [2024-11-27 00:54:38.971383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:02.421 [2024-11-27 
00:54:38.971393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:02.421 [2024-11-27 00:54:38.971404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:02.421 [2024-11-27 00:54:38.971409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:02.421 [2024-11-27 00:54:38.971415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:02.421 [2024-11-27 00:54:38.971422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:02.421 [2024-11-27 00:54:38.971427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:02.421 [2024-11-27 00:54:38.971433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:02.421 [2024-11-27 00:54:38.971438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:02.421 [2024-11-27 00:54:38.971445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:02.421 [2024-11-27 00:54:38.971451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:02.421 [2024-11-27 00:54:38.971456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:02.421 [2024-11-27 00:54:38.971462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:02.421 [2024-11-27 00:54:38.971467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:02.421 [2024-11-27 00:54:38.971472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:02.421 [2024-11-27 00:54:38.971477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:02.421 [2024-11-27 00:54:38.971484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:02.421 [2024-11-27 00:54:38.971491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:02.421 [2024-11-27 00:54:38.971497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:02.421 [2024-11-27 00:54:38.971502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:02.421 [2024-11-27 00:54:38.971508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:02.421 [2024-11-27 00:54:38.971514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:02.422 [2024-11-27 00:54:38.971520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:02.422 [2024-11-27 00:54:38.971526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:02.422 [2024-11-27 00:54:38.971531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:02.422 [2024-11-27 00:54:38.971537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:02.422 [2024-11-27 00:54:38.971543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:02.422 [2024-11-27 00:54:38.971549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:02.422 [2024-11-27 00:54:38.971555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:02.422 [2024-11-27 00:54:38.971562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:02.422 [2024-11-27 00:54:38.971568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:02.422 [2024-11-27 00:54:38.971574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:02.422 [2024-11-27 00:54:38.971583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:32:02.422 [2024-11-27 00:54:38.971589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:02.422 [2024-11-27 00:54:38.971595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:02.422 [2024-11-27 00:54:38.971601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:02.422 [2024-11-27 00:54:38.971607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:02.422 [2024-11-27 00:54:38.971612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:02.422 [2024-11-27 00:54:38.971618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:02.422 [2024-11-27 00:54:38.971623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:02.422 [2024-11-27 00:54:38.971630] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:02.422 [2024-11-27 00:54:38.971641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:02.422 [2024-11-27 00:54:38.971648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:02.422 [2024-11-27 00:54:38.971656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:02.422 [2024-11-27 00:54:38.971665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:02.422 [2024-11-27 00:54:38.971671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:02.422 [2024-11-27 00:54:38.971677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:02.422 [2024-11-27 00:54:38.971683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:02.422 [2024-11-27 00:54:38.971691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:02.422 [2024-11-27 00:54:38.971697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:02.422 [2024-11-27 00:54:38.971705] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:02.422 [2024-11-27 00:54:38.971722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.422 [2024-11-27 00:54:38.971729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:02.422 [2024-11-27 00:54:38.971737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:02.422 [2024-11-27 00:54:38.971745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:02.422 [2024-11-27 00:54:38.971751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:02.422 [2024-11-27 00:54:38.971758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:02.422 [2024-11-27 00:54:38.971764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:02.422 [2024-11-27 00:54:38.971771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:02.422 [2024-11-27 00:54:38.971778] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:02.422 [2024-11-27 00:54:38.971784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:02.422 [2024-11-27 00:54:38.971790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:02.422 [2024-11-27 00:54:38.971797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:02.422 [2024-11-27 00:54:38.971803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:02.422 [2024-11-27 00:54:38.971811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:02.422 [2024-11-27 00:54:38.971818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:02.422 [2024-11-27 00:54:38.971825] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:02.422 [2024-11-27 00:54:38.971833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:02.422 [2024-11-27 00:54:38.971840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:02.422 [2024-11-27 00:54:38.971846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:02.422 [2024-11-27 00:54:38.971862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:02.422 [2024-11-27 00:54:38.971869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:02.422 [2024-11-27 00:54:38.971875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:38.971882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:02.422 [2024-11-27 00:54:38.971888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:32:02.422 [2024-11-27 00:54:38.971894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:38.979455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:38.979479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:02.422 [2024-11-27 00:54:38.979487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.512 ms 00:32:02.422 [2024-11-27 00:54:38.979493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:38.979555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:38.979562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:02.422 [2024-11-27 00:54:38.979568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:02.422 [2024-11-27 00:54:38.979574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
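The layout dumps above give every region twice: as a size in MiB (dump_region) and as a hex block count (blk_sz in the superblock dump). The two columns agree if one FTL block is 4 KiB; that block size is inferred from these numbers, not stated in the log. A self-contained cross-check, with the 4 KiB constant as the assumption:

#include <stdio.h>

/* Convert the blk_sz block counts from the superblock dump above into
 * MiB, assuming a 4 KiB FTL block (inferred, not quoted in the log). */
#define FTL_BLOCK_SIZE 4096ULL

static double blocks_to_mib(unsigned long long blk_sz)
{
    return (double)(blk_sz * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
    printf("l2p     0x5000 blocks = %.2f MiB\n", blocks_to_mib(0x5000)); /* 80.00, matches Region l2p */
    printf("band_md 0x80 blocks   = %.2f MiB\n", blocks_to_mib(0x80));   /* 0.50,  matches Region band_md */
    printf("p2l0    0x800 blocks  = %.2f MiB\n", blocks_to_mib(0x800));  /* 8.00,  matches Region p2l0 */
    return 0;
}

The same conversion reproduces the 102400.00 MiB data_btm region from its 0x1900000-block entry (0x1900000 blocks at 4 KiB = 100 GiB).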
00:32:02.422 [2024-11-27 00:54:38.997740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:38.997786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:02.422 [2024-11-27 00:54:38.997803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.130 ms 00:32:02.422 [2024-11-27 00:54:38.997813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:38.997865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:38.997879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:02.422 [2024-11-27 00:54:38.997891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:02.422 [2024-11-27 00:54:38.997902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:38.998029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:38.998049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:02.422 [2024-11-27 00:54:38.998061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:02.422 [2024-11-27 00:54:38.998071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:38.998238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:38.998263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:02.422 [2024-11-27 00:54:38.998280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:32:02.422 [2024-11-27 00:54:38.998293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:39.005444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:39.005478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:02.422 [2024-11-27 00:54:39.005501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.127 ms 00:32:02.422 [2024-11-27 00:54:39.005512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:39.005632] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:02.422 [2024-11-27 00:54:39.005650] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:02.422 [2024-11-27 00:54:39.005664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:39.005676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:02.422 [2024-11-27 00:54:39.005687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:02.422 [2024-11-27 00:54:39.005699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:39.016691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:39.016715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:02.422 [2024-11-27 00:54:39.016723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.973 ms 00:32:02.422 [2024-11-27 00:54:39.016730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:39.016825] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:39.016833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:02.422 [2024-11-27 00:54:39.016839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:32:02.422 [2024-11-27 00:54:39.016847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:39.016886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.422 [2024-11-27 00:54:39.016897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:02.422 [2024-11-27 00:54:39.016904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:02.422 [2024-11-27 00:54:39.016911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.422 [2024-11-27 00:54:39.017145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.017153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:02.423 [2024-11-27 00:54:39.017167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:32:02.423 [2024-11-27 00:54:39.017173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.017186] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:02.423 [2024-11-27 00:54:39.017194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.017202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:02.423 [2024-11-27 00:54:39.017213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:02.423 [2024-11-27 00:54:39.017218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.024307] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:02.423 [2024-11-27 00:54:39.024403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.024410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:02.423 [2024-11-27 00:54:39.024418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.171 ms 00:32:02.423 [2024-11-27 00:54:39.024427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.026290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.026310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:02.423 [2024-11-27 00:54:39.026318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.849 ms 00:32:02.423 [2024-11-27 00:54:39.026324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.026380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.026387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:02.423 [2024-11-27 00:54:39.026397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:02.423 [2024-11-27 00:54:39.026404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.026421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.026428] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:02.423 [2024-11-27 00:54:39.026434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:02.423 [2024-11-27 00:54:39.026440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.026467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:02.423 [2024-11-27 00:54:39.026475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.026481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:02.423 [2024-11-27 00:54:39.026488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:02.423 [2024-11-27 00:54:39.026494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.030206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.030233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:02.423 [2024-11-27 00:54:39.030246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:32:02.423 [2024-11-27 00:54:39.030253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.030309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.423 [2024-11-27 00:54:39.030317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:02.423 [2024-11-27 00:54:39.030324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:02.423 [2024-11-27 00:54:39.030330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.423 [2024-11-27 00:54:39.031156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.772 ms, result 0 00:32:03.362  [2024-11-27T00:54:41.092Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-27T00:54:42.474Z] Copying: 40/1024 [MB] (16 MBps) [2024-11-27T00:54:43.407Z] Copying: 51/1024 [MB] (11 MBps) [2024-11-27T00:54:44.347Z] Copying: 75/1024 [MB] (23 MBps) [2024-11-27T00:54:45.292Z] Copying: 98/1024 [MB] (23 MBps) [2024-11-27T00:54:46.236Z] Copying: 113/1024 [MB] (14 MBps) [2024-11-27T00:54:47.170Z] Copying: 131/1024 [MB] (18 MBps) [2024-11-27T00:54:48.107Z] Copying: 158/1024 [MB] (26 MBps) [2024-11-27T00:54:49.050Z] Copying: 183/1024 [MB] (24 MBps) [2024-11-27T00:54:50.435Z] Copying: 200/1024 [MB] (17 MBps) [2024-11-27T00:54:51.381Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-27T00:54:52.320Z] Copying: 221/1024 [MB] (10 MBps) [2024-11-27T00:54:53.262Z] Copying: 232/1024 [MB] (11 MBps) [2024-11-27T00:54:54.204Z] Copying: 243/1024 [MB] (10 MBps) [2024-11-27T00:54:55.147Z] Copying: 264/1024 [MB] (21 MBps) [2024-11-27T00:54:56.088Z] Copying: 275/1024 [MB] (10 MBps) [2024-11-27T00:54:57.169Z] Copying: 291/1024 [MB] (16 MBps) [2024-11-27T00:54:58.112Z] Copying: 309/1024 [MB] (17 MBps) [2024-11-27T00:54:59.054Z] Copying: 326/1024 [MB] (16 MBps) [2024-11-27T00:55:00.442Z] Copying: 343/1024 [MB] (17 MBps) [2024-11-27T00:55:01.384Z] Copying: 363/1024 [MB] (19 MBps) [2024-11-27T00:55:02.322Z] Copying: 382/1024 [MB] (19 MBps) [2024-11-27T00:55:03.262Z] Copying: 415/1024 [MB] (32 MBps) [2024-11-27T00:55:04.203Z] Copying: 431/1024 [MB] (15 MBps) [2024-11-27T00:55:05.145Z] Copying: 447/1024 [MB] (16 MBps) [2024-11-27T00:55:06.086Z] Copying: 470/1024 [MB] (22 MBps) [2024-11-27T00:55:07.469Z] Copying: 
493/1024 [MB] (22 MBps) [2024-11-27T00:55:08.410Z] Copying: 507/1024 [MB] (13 MBps) [2024-11-27T00:55:09.351Z] Copying: 524/1024 [MB] (17 MBps) [2024-11-27T00:55:10.291Z] Copying: 540/1024 [MB] (15 MBps) [2024-11-27T00:55:11.233Z] Copying: 560/1024 [MB] (20 MBps) [2024-11-27T00:55:12.174Z] Copying: 577/1024 [MB] (16 MBps) [2024-11-27T00:55:13.116Z] Copying: 593/1024 [MB] (16 MBps) [2024-11-27T00:55:14.056Z] Copying: 613/1024 [MB] (20 MBps) [2024-11-27T00:55:15.441Z] Copying: 633/1024 [MB] (19 MBps) [2024-11-27T00:55:16.383Z] Copying: 658/1024 [MB] (25 MBps) [2024-11-27T00:55:17.324Z] Copying: 675/1024 [MB] (17 MBps) [2024-11-27T00:55:18.266Z] Copying: 695/1024 [MB] (20 MBps) [2024-11-27T00:55:19.211Z] Copying: 720/1024 [MB] (25 MBps) [2024-11-27T00:55:20.154Z] Copying: 734/1024 [MB] (13 MBps) [2024-11-27T00:55:21.098Z] Copying: 758/1024 [MB] (23 MBps) [2024-11-27T00:55:22.044Z] Copying: 795/1024 [MB] (36 MBps) [2024-11-27T00:55:23.433Z] Copying: 813/1024 [MB] (17 MBps) [2024-11-27T00:55:24.377Z] Copying: 830/1024 [MB] (17 MBps) [2024-11-27T00:55:25.321Z] Copying: 847/1024 [MB] (17 MBps) [2024-11-27T00:55:26.260Z] Copying: 863/1024 [MB] (15 MBps) [2024-11-27T00:55:27.203Z] Copying: 892/1024 [MB] (29 MBps) [2024-11-27T00:55:28.148Z] Copying: 910/1024 [MB] (17 MBps) [2024-11-27T00:55:29.091Z] Copying: 927/1024 [MB] (17 MBps) [2024-11-27T00:55:30.480Z] Copying: 961/1024 [MB] (33 MBps) [2024-11-27T00:55:31.053Z] Copying: 975/1024 [MB] (14 MBps) [2024-11-27T00:55:32.444Z] Copying: 993/1024 [MB] (17 MBps) [2024-11-27T00:55:33.016Z] Copying: 1021/1024 [MB] (28 MBps) [2024-11-27T00:55:33.016Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-27 00:55:32.896467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.229 [2024-11-27 00:55:32.896591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:56.229 [2024-11-27 00:55:32.896743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:56.229 [2024-11-27 00:55:32.896766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.229 [2024-11-27 00:55:32.899005] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:56.229 [2024-11-27 00:55:32.900463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.229 [2024-11-27 00:55:32.900553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:56.229 [2024-11-27 00:55:32.900606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:32:56.229 [2024-11-27 00:55:32.900624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.229 [2024-11-27 00:55:32.908703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.229 [2024-11-27 00:55:32.908806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:56.229 [2024-11-27 00:55:32.908851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.418 ms 00:32:56.229 [2024-11-27 00:55:32.908878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.229 [2024-11-27 00:55:32.908912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.229 [2024-11-27 00:55:32.909017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:56.229 [2024-11-27 00:55:32.909043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:56.229 [2024-11-27 00:55:32.909058] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.229 [2024-11-27 00:55:32.909108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.229 [2024-11-27 00:55:32.909126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:56.229 [2024-11-27 00:55:32.909142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:56.229 [2024-11-27 00:55:32.909188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.229 [2024-11-27 00:55:32.909211] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:56.229 [2024-11-27 00:55:32.909230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126464 / 261120 wr_cnt: 1 state: open
[... ftl_dev_dump_bands: Bands 2-70: 0 / 261120 wr_cnt: 0 state: free (identical entries elided) ...]
00:32:56.230 [2024-11-27
00:55:32.910014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:32:56.230 [2024-11-27 00:55:32.910168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:56.230 [2024-11-27 00:55:32.910205] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:56.230 [2024-11-27 00:55:32.910211] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d13e913d-e9f0-4de6-927a-adc927448444 00:32:56.230 [2024-11-27 00:55:32.910217] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126464 00:32:56.230 [2024-11-27 00:55:32.910223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126496 00:32:56.230 [2024-11-27 00:55:32.910229] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126464 00:32:56.230 [2024-11-27 00:55:32.910236] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:32:56.230 [2024-11-27 00:55:32.910243] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:56.230 [2024-11-27 00:55:32.910249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:56.230 [2024-11-27 00:55:32.910255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:56.230 [2024-11-27 00:55:32.910260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:56.230 [2024-11-27 00:55:32.910265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:56.230 [2024-11-27 00:55:32.910271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.230 [2024-11-27 00:55:32.910277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:56.230 [2024-11-27 00:55:32.910283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:32:56.230 [2024-11-27 00:55:32.910288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.230 [2024-11-27 00:55:32.911583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.230 [2024-11-27 00:55:32.911606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:56.230 [2024-11-27 00:55:32.911618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:32:56.230 [2024-11-27 00:55:32.911624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.230 [2024-11-27 00:55:32.911706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.230 [2024-11-27 00:55:32.911720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:56.230 [2024-11-27 00:55:32.911727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:56.230 [2024-11-27 00:55:32.911732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.230 [2024-11-27 00:55:32.915828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.230 [2024-11-27 00:55:32.915870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize reloc 00:32:56.230 [2024-11-27 00:55:32.915878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.230 [2024-11-27 00:55:32.915884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.230 [2024-11-27 00:55:32.915923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.230 [2024-11-27 00:55:32.915929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:56.230 [2024-11-27 00:55:32.915940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.230 [2024-11-27 00:55:32.915945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.230 [2024-11-27 00:55:32.915968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.230 [2024-11-27 00:55:32.915976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:56.230 [2024-11-27 00:55:32.915981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.230 [2024-11-27 00:55:32.915987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.230 [2024-11-27 00:55:32.915998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.230 [2024-11-27 00:55:32.916004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:56.230 [2024-11-27 00:55:32.916010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.230 [2024-11-27 00:55:32.916016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.230 [2024-11-27 00:55:32.923457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.230 [2024-11-27 00:55:32.923492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:56.230 [2024-11-27 00:55:32.923499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.923505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.231 [2024-11-27 00:55:32.930134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:56.231 [2024-11-27 00:55:32.930142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.930148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.231 [2024-11-27 00:55:32.930188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:56.231 [2024-11-27 00:55:32.930198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.930204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.231 [2024-11-27 00:55:32.930233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:56.231 [2024-11-27 00:55:32.930239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.930249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.231 [2024-11-27 
00:55:32.930295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:56.231 [2024-11-27 00:55:32.930301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.930311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.231 [2024-11-27 00:55:32.930334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:56.231 [2024-11-27 00:55:32.930341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.930346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.231 [2024-11-27 00:55:32.930379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:56.231 [2024-11-27 00:55:32.930385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.930390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.231 [2024-11-27 00:55:32.930431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:56.231 [2024-11-27 00:55:32.930436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.231 [2024-11-27 00:55:32.930442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.231 [2024-11-27 00:55:32.930538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 35.826 ms, result 0 00:32:57.175 00:32:57.175 00:32:57.176 00:55:33 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:57.176 [2024-11-27 00:55:33.940435] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization... 
00:32:57.176 [2024-11-27 00:55:33.940579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96685 ] 00:32:57.437 [2024-11-27 00:55:34.099523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.438 [2024-11-27 00:55:34.116830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.438 [2024-11-27 00:55:34.198851] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:57.438 [2024-11-27 00:55:34.198920] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:57.701 [2024-11-27 00:55:34.345792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.345830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:57.701 [2024-11-27 00:55:34.345840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:57.701 [2024-11-27 00:55:34.345846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.345891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.345900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:57.701 [2024-11-27 00:55:34.345906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:57.701 [2024-11-27 00:55:34.345912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.345926] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:57.701 [2024-11-27 00:55:34.346094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:57.701 [2024-11-27 00:55:34.346104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.346110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:57.701 [2024-11-27 00:55:34.346120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:32:57.701 [2024-11-27 00:55:34.346125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.346295] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:57.701 [2024-11-27 00:55:34.346321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.346328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:57.701 [2024-11-27 00:55:34.346335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:57.701 [2024-11-27 00:55:34.346342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.346404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.346412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:57.701 [2024-11-27 00:55:34.346421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:57.701 [2024-11-27 00:55:34.346426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.346608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:57.701 [2024-11-27 00:55:34.346623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:57.701 [2024-11-27 00:55:34.346630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:32:57.701 [2024-11-27 00:55:34.346638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.346696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.346703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:57.701 [2024-11-27 00:55:34.346709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:57.701 [2024-11-27 00:55:34.346717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.346732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.346738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:57.701 [2024-11-27 00:55:34.346745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:57.701 [2024-11-27 00:55:34.346750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.346765] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:57.701 [2024-11-27 00:55:34.347994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.348014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:57.701 [2024-11-27 00:55:34.348021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:32:57.701 [2024-11-27 00:55:34.348026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.348051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.701 [2024-11-27 00:55:34.348057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:57.701 [2024-11-27 00:55:34.348064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:57.701 [2024-11-27 00:55:34.348074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.701 [2024-11-27 00:55:34.348091] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:57.701 [2024-11-27 00:55:34.348106] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:57.701 [2024-11-27 00:55:34.348133] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:57.701 [2024-11-27 00:55:34.348143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:57.702 [2024-11-27 00:55:34.348221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:57.702 [2024-11-27 00:55:34.348232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:57.702 [2024-11-27 00:55:34.348240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:57.702 [2024-11-27 00:55:34.348248] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348256] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348262] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:57.702 [2024-11-27 00:55:34.348268] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:57.702 [2024-11-27 00:55:34.348273] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:57.702 [2024-11-27 00:55:34.348279] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:57.702 [2024-11-27 00:55:34.348287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.702 [2024-11-27 00:55:34.348293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:57.702 [2024-11-27 00:55:34.348299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:32:57.702 [2024-11-27 00:55:34.348307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.702 [2024-11-27 00:55:34.348372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.702 [2024-11-27 00:55:34.348380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:57.702 [2024-11-27 00:55:34.348386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:57.702 [2024-11-27 00:55:34.348391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.702 [2024-11-27 00:55:34.348465] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:57.702 [2024-11-27 00:55:34.348479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:57.702 [2024-11-27 00:55:34.348489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:57.702 [2024-11-27 00:55:34.348510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:57.702 [2024-11-27 00:55:34.348526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:57.702 [2024-11-27 00:55:34.348536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:57.702 [2024-11-27 00:55:34.348544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:57.702 [2024-11-27 00:55:34.348549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:57.702 [2024-11-27 00:55:34.348555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:57.702 [2024-11-27 00:55:34.348560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:57.702 [2024-11-27 00:55:34.348564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:57.702 [2024-11-27 00:55:34.348575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348579] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:57.702 [2024-11-27 00:55:34.348589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:57.702 [2024-11-27 00:55:34.348604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:57.702 [2024-11-27 00:55:34.348620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:57.702 [2024-11-27 00:55:34.348636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:57.702 [2024-11-27 00:55:34.348653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:57.702 [2024-11-27 00:55:34.348664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:57.702 [2024-11-27 00:55:34.348670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:57.702 [2024-11-27 00:55:34.348675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:57.702 [2024-11-27 00:55:34.348681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:57.702 [2024-11-27 00:55:34.348687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:57.702 [2024-11-27 00:55:34.348693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:57.702 [2024-11-27 00:55:34.348704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:57.702 [2024-11-27 00:55:34.348710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348717] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:57.702 [2024-11-27 00:55:34.348724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:57.702 [2024-11-27 00:55:34.348730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:57.702 [2024-11-27 00:55:34.348745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:57.702 [2024-11-27 00:55:34.348751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:57.702 [2024-11-27 00:55:34.348756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:57.702 
[2024-11-27 00:55:34.348762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:57.702 [2024-11-27 00:55:34.348768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:57.702 [2024-11-27 00:55:34.348774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:57.702 [2024-11-27 00:55:34.348780] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:57.702 [2024-11-27 00:55:34.348788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.702 [2024-11-27 00:55:34.348795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:57.702 [2024-11-27 00:55:34.348801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:57.702 [2024-11-27 00:55:34.348807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:57.702 [2024-11-27 00:55:34.348812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:57.702 [2024-11-27 00:55:34.348820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:57.702 [2024-11-27 00:55:34.348826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:57.702 [2024-11-27 00:55:34.348833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:57.702 [2024-11-27 00:55:34.348839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:57.702 [2024-11-27 00:55:34.348845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:57.702 [2024-11-27 00:55:34.348862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:57.703 [2024-11-27 00:55:34.348868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:57.703 [2024-11-27 00:55:34.348875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:57.703 [2024-11-27 00:55:34.348881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:57.703 [2024-11-27 00:55:34.348887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:57.703 [2024-11-27 00:55:34.348893] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:57.703 [2024-11-27 00:55:34.348900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:57.703 [2024-11-27 00:55:34.348908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:57.703 [2024-11-27 00:55:34.348915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:57.703 [2024-11-27 00:55:34.348922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:57.703 [2024-11-27 00:55:34.348928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:57.703 [2024-11-27 00:55:34.348936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.348943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:57.703 [2024-11-27 00:55:34.348950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:32:57.703 [2024-11-27 00:55:34.348956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.354351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.354375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:57.703 [2024-11-27 00:55:34.354383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.365 ms 00:32:57.703 [2024-11-27 00:55:34.354388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.354446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.354453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:57.703 [2024-11-27 00:55:34.354460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:57.703 [2024-11-27 00:55:34.354465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.370320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.370359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:57.703 [2024-11-27 00:55:34.370370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.817 ms 00:32:57.703 [2024-11-27 00:55:34.370378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.370406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.370415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:57.703 [2024-11-27 00:55:34.370423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:57.703 [2024-11-27 00:55:34.370429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.370543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.370557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:57.703 [2024-11-27 00:55:34.370569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:57.703 [2024-11-27 00:55:34.370577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.370705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.370713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:57.703 [2024-11-27 00:55:34.370724] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:32:57.703 [2024-11-27 00:55:34.370731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.375663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.375698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:57.703 [2024-11-27 00:55:34.375718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:32:57.703 [2024-11-27 00:55:34.375726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.375827] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:57.703 [2024-11-27 00:55:34.375847] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:57.703 [2024-11-27 00:55:34.375880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.375889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:57.703 [2024-11-27 00:55:34.375898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:57.703 [2024-11-27 00:55:34.375908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.388311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.388337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:57.703 [2024-11-27 00:55:34.388346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.388 ms 00:32:57.703 [2024-11-27 00:55:34.388352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.388440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.388449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:57.703 [2024-11-27 00:55:34.388458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:57.703 [2024-11-27 00:55:34.388466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.388501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.388516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:57.703 [2024-11-27 00:55:34.388522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:57.703 [2024-11-27 00:55:34.388527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.388747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.388762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:57.703 [2024-11-27 00:55:34.388768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:32:57.703 [2024-11-27 00:55:34.388773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.388784] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:57.703 [2024-11-27 00:55:34.388791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.388799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:57.703 [2024-11-27 00:55:34.388807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:57.703 [2024-11-27 00:55:34.388812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.394989] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:57.703 [2024-11-27 00:55:34.395084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.395091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:57.703 [2024-11-27 00:55:34.395098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.257 ms 00:32:57.703 [2024-11-27 00:55:34.395150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.396912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.396933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:57.703 [2024-11-27 00:55:34.396940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:32:57.703 [2024-11-27 00:55:34.396950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.396986] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:57.703 [2024-11-27 00:55:34.397415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.397432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:57.703 [2024-11-27 00:55:34.397440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:32:57.703 [2024-11-27 00:55:34.397446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.397463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.397472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:57.703 [2024-11-27 00:55:34.397477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:57.703 [2024-11-27 00:55:34.397483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.397505] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:57.703 [2024-11-27 00:55:34.397516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.397521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:57.703 [2024-11-27 00:55:34.397527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:57.703 [2024-11-27 00:55:34.397535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.400863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.400890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:57.703 [2024-11-27 00:55:34.400903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:32:57.703 [2024-11-27 00:55:34.400909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.400959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:57.703 [2024-11-27 00:55:34.400966] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:57.703 [2024-11-27 00:55:34.400972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:57.703 [2024-11-27 00:55:34.400978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:57.703 [2024-11-27 00:55:34.401663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 55.587 ms, result 0 00:32:59.092  [2024-11-27T00:55:36.825Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-27T00:55:37.770Z] Copying: 39/1024 [MB] (19 MBps) [2024-11-27T00:55:38.732Z] Copying: 61/1024 [MB] (21 MBps) [2024-11-27T00:55:39.778Z] Copying: 78/1024 [MB] (16 MBps) [2024-11-27T00:55:40.720Z] Copying: 98/1024 [MB] (20 MBps) [2024-11-27T00:55:41.664Z] Copying: 113/1024 [MB] (14 MBps) [2024-11-27T00:55:42.608Z] Copying: 127/1024 [MB] (13 MBps) [2024-11-27T00:55:43.552Z] Copying: 140/1024 [MB] (13 MBps) [2024-11-27T00:55:44.940Z] Copying: 153/1024 [MB] (12 MBps) [2024-11-27T00:55:45.885Z] Copying: 169/1024 [MB] (16 MBps) [2024-11-27T00:55:46.829Z] Copying: 182/1024 [MB] (13 MBps) [2024-11-27T00:55:47.773Z] Copying: 197/1024 [MB] (15 MBps) [2024-11-27T00:55:48.718Z] Copying: 210/1024 [MB] (13 MBps) [2024-11-27T00:55:49.662Z] Copying: 228/1024 [MB] (17 MBps) [2024-11-27T00:55:50.606Z] Copying: 245/1024 [MB] (17 MBps) [2024-11-27T00:55:51.550Z] Copying: 257/1024 [MB] (12 MBps) [2024-11-27T00:55:52.937Z] Copying: 276/1024 [MB] (18 MBps) [2024-11-27T00:55:53.881Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-27T00:55:54.825Z] Copying: 305/1024 [MB] (18 MBps) [2024-11-27T00:55:55.769Z] Copying: 319/1024 [MB] (13 MBps) [2024-11-27T00:55:56.713Z] Copying: 335/1024 [MB] (16 MBps) [2024-11-27T00:55:57.659Z] Copying: 348/1024 [MB] (12 MBps) [2024-11-27T00:55:58.604Z] Copying: 370/1024 [MB] (21 MBps) [2024-11-27T00:55:59.547Z] Copying: 380/1024 [MB] (10 MBps) [2024-11-27T00:56:00.934Z] Copying: 397/1024 [MB] (16 MBps) [2024-11-27T00:56:01.879Z] Copying: 413/1024 [MB] (16 MBps) [2024-11-27T00:56:02.824Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-27T00:56:03.770Z] Copying: 440/1024 [MB] (14 MBps) [2024-11-27T00:56:04.715Z] Copying: 451/1024 [MB] (11 MBps) [2024-11-27T00:56:05.661Z] Copying: 463/1024 [MB] (11 MBps) [2024-11-27T00:56:06.605Z] Copying: 481/1024 [MB] (18 MBps) [2024-11-27T00:56:07.548Z] Copying: 504/1024 [MB] (23 MBps) [2024-11-27T00:56:08.933Z] Copying: 521/1024 [MB] (16 MBps) [2024-11-27T00:56:09.873Z] Copying: 534/1024 [MB] (12 MBps) [2024-11-27T00:56:10.869Z] Copying: 547/1024 [MB] (13 MBps) [2024-11-27T00:56:11.813Z] Copying: 561/1024 [MB] (14 MBps) [2024-11-27T00:56:12.757Z] Copying: 581/1024 [MB] (19 MBps) [2024-11-27T00:56:13.701Z] Copying: 607/1024 [MB] (25 MBps) [2024-11-27T00:56:14.645Z] Copying: 619/1024 [MB] (12 MBps) [2024-11-27T00:56:15.590Z] Copying: 635/1024 [MB] (16 MBps) [2024-11-27T00:56:16.979Z] Copying: 659/1024 [MB] (24 MBps) [2024-11-27T00:56:17.553Z] Copying: 678/1024 [MB] (18 MBps) [2024-11-27T00:56:18.941Z] Copying: 695/1024 [MB] (16 MBps) [2024-11-27T00:56:19.883Z] Copying: 707/1024 [MB] (12 MBps) [2024-11-27T00:56:20.828Z] Copying: 727/1024 [MB] (19 MBps) [2024-11-27T00:56:21.772Z] Copying: 743/1024 [MB] (16 MBps) [2024-11-27T00:56:22.714Z] Copying: 759/1024 [MB] (16 MBps) [2024-11-27T00:56:23.658Z] Copying: 776/1024 [MB] (17 MBps) [2024-11-27T00:56:24.601Z] Copying: 797/1024 [MB] (20 MBps) [2024-11-27T00:56:25.546Z] Copying: 817/1024 [MB] (20 MBps) [2024-11-27T00:56:26.930Z] Copying: 836/1024 [MB] (19 MBps) 
[2024-11-27T00:56:27.873Z] Copying: 858/1024 [MB] (21 MBps) [2024-11-27T00:56:28.817Z] Copying: 877/1024 [MB] (19 MBps) [2024-11-27T00:56:29.759Z] Copying: 893/1024 [MB] (15 MBps) [2024-11-27T00:56:30.704Z] Copying: 910/1024 [MB] (16 MBps) [2024-11-27T00:56:31.649Z] Copying: 925/1024 [MB] (14 MBps) [2024-11-27T00:56:32.592Z] Copying: 945/1024 [MB] (19 MBps) [2024-11-27T00:56:33.980Z] Copying: 965/1024 [MB] (20 MBps) [2024-11-27T00:56:34.553Z] Copying: 976/1024 [MB] (10 MBps) [2024-11-27T00:56:35.943Z] Copying: 995/1024 [MB] (18 MBps) [2024-11-27T00:56:36.516Z] Copying: 1013/1024 [MB] (17 MBps) [2024-11-27T00:56:37.090Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-27 00:56:36.931942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:00.303 [2024-11-27 00:56:36.932078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:00.303 [2024-11-27 00:56:36.932094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:00.303 [2024-11-27 00:56:36.932103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.303 [2024-11-27 00:56:36.932129] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:00.303 [2024-11-27 00:56:36.932784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:00.303 [2024-11-27 00:56:36.932825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:00.303 [2024-11-27 00:56:36.932838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:34:00.303 [2024-11-27 00:56:36.932873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.303 [2024-11-27 00:56:36.933115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:00.303 [2024-11-27 00:56:36.933128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:00.303 [2024-11-27 00:56:36.933139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:34:00.303 [2024-11-27 00:56:36.933154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.303 [2024-11-27 00:56:36.933185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:00.303 [2024-11-27 00:56:36.933200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:00.303 [2024-11-27 00:56:36.933209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:00.303 [2024-11-27 00:56:36.933218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.303 [2024-11-27 00:56:36.933299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:00.303 [2024-11-27 00:56:36.933310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:00.303 [2024-11-27 00:56:36.933322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:00.303 [2024-11-27 00:56:36.933332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:00.303 [2024-11-27 00:56:36.933346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:00.303 [2024-11-27 00:56:36.933364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:00.303 [2024-11-27 00:56:36.933374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:00.303 [2024-11-27 00:56:36.933382] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:34:00.303 [2024-11-27 00:56:36.933702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.933992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:34:00.304 [2024-11-27 00:56:36.934231] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:34:00.304 [2024-11-27 00:56:36.934239] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d13e913d-e9f0-4de6-927a-adc927448444
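The per-band dump above is uniform: every band reports 0 of 261120 blocks written, wr_cnt 0, state free, which is what a freshly initialized device looks like before the restore workload. When scanning a dump like this for outliers, a filter along these lines can help, as a hypothetical sketch assuming the log was saved to build.log:

    # Print only bands that deviate from the idle pattern; prints nothing
    # for the dump above (hypothetical helper, not part of the test suite).
    grep 'ftl_dev_dump_bands' build.log | grep -Ev 'wr_cnt: 0 state: free$'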
00:34:00.304 [2024-11-27 00:56:36.934248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:34:00.304 [2024-11-27 00:56:36.934255] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4640
00:34:00.304 [2024-11-27 00:56:36.934263] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4608
00:34:00.304 [2024-11-27 00:56:36.934275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0069
00:34:00.304 [2024-11-27 00:56:36.934282] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:34:00.304 [2024-11-27 00:56:36.934290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:34:00.304 [2024-11-27 00:56:36.934299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:34:00.304 [2024-11-27 00:56:36.934305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:34:00.304 [2024-11-27 00:56:36.934312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:34:00.304 [2024-11-27 00:56:36.934320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:00.304 [2024-11-27 00:56:36.934328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:34:00.304 [2024-11-27 00:56:36.934336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms
00:34:00.304 [2024-11-27 00:56:36.934343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.304 [2024-11-27 00:56:36.936829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:00.304 [2024-11-27 00:56:36.936887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:34:00.304 [2024-11-27 00:56:36.936899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.468 ms
00:34:00.304 [2024-11-27 00:56:36.936907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.304 [2024-11-27 00:56:36.937025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:00.304 [2024-11-27 00:56:36.937041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:34:00.304 [2024-11-27 00:56:36.937051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms
00:34:00.304 [2024-11-27 00:56:36.937060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.304 [2024-11-27 00:56:36.944983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.304 [2024-11-27 00:56:36.945019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:34:00.304 [2024-11-27 00:56:36.945029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.304 [2024-11-27 00:56:36.945038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.304 [2024-11-27 00:56:36.945103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.304 [2024-11-27 00:56:36.945122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:34:00.304 [2024-11-27 00:56:36.945131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.304 [2024-11-27 00:56:36.945138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.304 [2024-11-27 00:56:36.945197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.304 [2024-11-27 00:56:36.945213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:34:00.304 [2024-11-27 00:56:36.945222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.304 [2024-11-27 00:56:36.945230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.304 [2024-11-27 00:56:36.945258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.304 [2024-11-27 00:56:36.945268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:34:00.304 [2024-11-27 00:56:36.945277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.304 [2024-11-27 00:56:36.945285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.304 [2024-11-27 00:56:36.959080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.959129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:34:00.305 [2024-11-27 00:56:36.959141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.959156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.972553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.972598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:34:00.305 [2024-11-27 00:56:36.972610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.972629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.972684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.972695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:34:00.305 [2024-11-27 00:56:36.972707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.972716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.972754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.972763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:34:00.305 [2024-11-27 00:56:36.972771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.972788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.972848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.972878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:34:00.305 [2024-11-27 00:56:36.972887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.972898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.972922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.972937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:34:00.305 [2024-11-27 00:56:36.972945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.972954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.972995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.973004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:34:00.305 [2024-11-27 00:56:36.973018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.973029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.973077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:00.305 [2024-11-27 00:56:36.973088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:00.305 [2024-11-27 00:56:36.973097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:00.305 [2024-11-27 00:56:36.973105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:00.305 [2024-11-27 00:56:36.973241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.261 ms, result 0
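The WAF reported in the statistics dump above is simply total writes divided by user writes: 4640 / 4608 is approximately 1.0069, meaning the FTL issued roughly 0.7% extra writes (metadata and housekeeping) on top of the user data. A one-line recomputation of that figure, as a sketch:

    # Recompute the write amplification factor from the two counters above.
    awk 'BEGIN { printf "WAF = %.4f\n", 4640 / 4608 }'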
00:34:00.565
00:34:00.565
00:34:00.565 00:56:37 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:02.480 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:34:02.480 00:56:39 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:34:02.480 00:56:39 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:34:02.480 00:56:39 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:34:02.741 00:56:39 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:34:02.741 00:56:39 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:34:02.741 00:56:39 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94722
00:34:02.741 00:56:39 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94722 ']'
00:34:02.741 00:56:39 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94722
00:34:02.741 Process with pid 94722 is not found
00:34:02.741 Remove shared memory files
00:34:02.741 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94722) - No such process
00:34:02.741 00:56:39 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94722 is not found'
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_band_md /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_l2p_l1 /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_l2p_l2 /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_l2p_l2_ctx /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_nvc_md /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_p2l_pool /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_sb /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_sb_shm /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_trim_bitmap /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_trim_log /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_trim_md /dev/hugepages/ftl_d13e913d-e9f0-4de6-927a-adc927448444_vmap
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:34:02.742
00:34:02.742 real 4m16.350s
00:34:02.742 user 4m4.940s
00:34:02.742 sys 0m11.302s
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:02.742 ************************************
00:34:02.742 END TEST ftl_restore_fast
00:34:02.742 ************************************
00:34:02.742 00:56:39 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:34:02.742 00:56:39 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:34:02.742 00:56:39 ftl -- ftl/ftl.sh@14 -- # killprocess 86472
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@954 -- # '[' -z 86472 ']'
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@958 -- # kill -0 86472
00:34:02.742 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86472) - No such process
00:34:02.742 Process with pid 86472 is not found
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 86472 is not found'
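Both killprocess calls above target pids (94722 and 86472) that have already exited, so the kill -0 liveness probe fails and the helper just logs the bash error and echoes the not-found message. A minimal sketch of that probe pattern, not the actual autotest_common.sh implementation:

    # Sketch of the kill -0 probe seen above. Note that wait only applies
    # to children of this shell, as spdk_tgt is for the test scripts.
    pid=86472
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" && wait "$pid"
    else
        echo "Process with pid $pid is not found"
    fi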
00:34:02.742 00:56:39 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:34:02.742 00:56:39 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97359
00:34:02.742 00:56:39 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97359
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@835 -- # '[' -z 97359 ']'
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:02.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:02.742 00:56:39 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:02.742 00:56:39 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:34:02.742 [2024-11-27 00:56:39.491198] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 23.11.0 initialization...
00:34:02.742 [2024-11-27 00:56:39.491362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97359 ]
00:34:03.003 [2024-11-27 00:56:39.655965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:03.003 [2024-11-27 00:56:39.684216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:03.577 00:56:40 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:03.577 00:56:40 ftl -- common/autotest_common.sh@868 -- # return 0
00:34:03.577 00:56:40 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:04.150 nvme0n1
00:34:04.150 00:56:40 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:04.150 00:56:40 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:04.150 00:56:40 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:04.150 00:56:40 ftl -- ftl/common.sh@28 -- # stores=25c61368-91ef-489f-bf8d-b54877d076fb
00:34:04.150 00:56:40 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:04.150 00:56:40 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25c61368-91ef-489f-bf8d-b54877d076fb
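The clear_lvols step above enumerates every lvstore over RPC, extracts the UUIDs with jq, and deletes each one, exactly as the ftl/common.sh@28 through @30 trace lines show. Reconstructed as a standalone sketch with the rpc.py path taken from the log:

    # Sketch of the clear_lvols flow above: list lvstore UUIDs, delete each.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done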
00:34:04.412 00:56:41 ftl -- ftl/ftl.sh@23 -- # killprocess 97359
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 97359 ']'
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@958 -- # kill -0 97359
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@959 -- # uname
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97359
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97359'
00:34:04.412 killing process with pid 97359
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@973 -- # kill 97359
00:34:04.412 00:56:41 ftl -- common/autotest_common.sh@978 -- # wait 97359
00:34:04.674 00:56:41 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:04.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:04.935 Waiting for block devices as requested
00:34:05.197 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:05.197 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:05.197 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:05.457 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:10.752 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:10.752 00:56:47 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:10.752 00:56:47 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:10.752 Remove shared memory files
00:34:10.752 00:56:47 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:10.752 00:56:47 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:10.752 00:56:47 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:10.752 00:56:47 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:10.752 00:56:47 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:10.752
00:34:10.752 real 16m13.432s
00:34:10.752 user 18m4.412s
00:34:10.752 sys 1m24.137s
00:34:10.752 00:56:47 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:10.752 ************************************
00:34:10.752 END TEST ftl
00:34:10.752 ************************************
00:34:10.752 00:56:47 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:10.752 00:56:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:10.752 00:56:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:10.752 00:56:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:10.752 00:56:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:10.752 00:56:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:10.752 00:56:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:10.752 00:56:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:10.752 00:56:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:10.752 00:56:47 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:10.752 00:56:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:10.752 00:56:47 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:10.752 00:56:47 -- common/autotest_common.sh@10 -- # set +x
00:34:12.202 INFO: APP EXITING
00:34:12.202 INFO: killing all VMs
00:34:12.202 INFO: killing vhost app
00:34:12.202 INFO: EXIT DONE
00:34:12.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:12.732 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:12.732 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:12.732 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:12.732 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:13.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:13.563 Cleaning
00:34:13.563 Removing: /var/run/dpdk/spdk0/config
00:34:13.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:13.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:13.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:13.563 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:13.563 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:13.563 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:13.563 Removing: /var/run/dpdk/spdk0
00:34:13.563 Removing: /var/run/dpdk/spdk_pid69391
00:34:13.563 Removing: /var/run/dpdk/spdk_pid69555
00:34:13.563 Removing: /var/run/dpdk/spdk_pid69756
00:34:13.563 Removing: /var/run/dpdk/spdk_pid69838
00:34:13.563 Removing: /var/run/dpdk/spdk_pid69872
00:34:13.563 Removing: /var/run/dpdk/spdk_pid69978
00:34:13.563 Removing: /var/run/dpdk/spdk_pid69996
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70174
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70252
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70332
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70426
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70507
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70546
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70577
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70648
00:34:13.563 Removing: /var/run/dpdk/spdk_pid70743
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71168
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71210
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71262
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71278
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71336
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71352
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71410
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71425
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71468
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71486
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71528
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71546
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71673
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71704
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71793
00:34:13.563 Removing: /var/run/dpdk/spdk_pid71954
00:34:13.563 Removing: /var/run/dpdk/spdk_pid72016
00:34:13.563 Removing: /var/run/dpdk/spdk_pid72047
00:34:13.824 Removing: /var/run/dpdk/spdk_pid72467
00:34:13.824 Removing: /var/run/dpdk/spdk_pid72559
00:34:13.824 Removing: /var/run/dpdk/spdk_pid72659
00:34:13.824 Removing: /var/run/dpdk/spdk_pid72701
00:34:13.824 Removing: /var/run/dpdk/spdk_pid72721
00:34:13.824 Removing: /var/run/dpdk/spdk_pid72799
00:34:13.824 Removing: /var/run/dpdk/spdk_pid73411
00:34:13.824 Removing: /var/run/dpdk/spdk_pid73442
00:34:13.824 Removing: /var/run/dpdk/spdk_pid73889
00:34:13.824 Removing: /var/run/dpdk/spdk_pid73971
00:34:13.824 Removing: /var/run/dpdk/spdk_pid74080
00:34:13.824 Removing: /var/run/dpdk/spdk_pid74122
00:34:13.824 Removing: /var/run/dpdk/spdk_pid74142
00:34:13.824 Removing: /var/run/dpdk/spdk_pid74162
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76006
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76127
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76131
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76143
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76187
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76191
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76203
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76248
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76252
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76264
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76310
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76314
00:34:13.824 Removing: /var/run/dpdk/spdk_pid76326
00:34:13.824 Removing: /var/run/dpdk/spdk_pid77712
00:34:13.824 Removing: /var/run/dpdk/spdk_pid77798
00:34:13.824 Removing: /var/run/dpdk/spdk_pid79191
00:34:13.824 Removing: /var/run/dpdk/spdk_pid80942
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81000
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81064
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81169
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81250
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81341
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81393
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81463
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81561
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81642
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81732
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81790
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81854
00:34:13.824 Removing: /var/run/dpdk/spdk_pid81947
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82033
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82118
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82181
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82245
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82345
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82431
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82521
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82577
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82640
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82711
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82774
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82872
00:34:13.824 Removing: /var/run/dpdk/spdk_pid82956
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83041
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83093
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83162
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83231
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83296
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83393
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83473
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83611
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83873
00:34:13.825 Removing: /var/run/dpdk/spdk_pid83904
00:34:13.825 Removing: /var/run/dpdk/spdk_pid84352
00:34:13.825 Removing: /var/run/dpdk/spdk_pid84525
00:34:13.825 Removing: /var/run/dpdk/spdk_pid84614
00:34:13.825 Removing: /var/run/dpdk/spdk_pid84718
00:34:13.825 Removing: /var/run/dpdk/spdk_pid84761
00:34:13.825 Removing: /var/run/dpdk/spdk_pid84787
00:34:13.825 Removing: /var/run/dpdk/spdk_pid85073
00:34:13.825 Removing: /var/run/dpdk/spdk_pid85111
00:34:13.825 Removing: /var/run/dpdk/spdk_pid85163
00:34:13.825 Removing: /var/run/dpdk/spdk_pid85531
00:34:13.825 Removing: /var/run/dpdk/spdk_pid85676
00:34:13.825 Removing: /var/run/dpdk/spdk_pid86472
00:34:13.825 Removing: /var/run/dpdk/spdk_pid86588
00:34:13.825 Removing: /var/run/dpdk/spdk_pid86748
00:34:13.825 Removing: /var/run/dpdk/spdk_pid86840
00:34:13.825 Removing: /var/run/dpdk/spdk_pid87115
00:34:13.825 Removing: /var/run/dpdk/spdk_pid87368
00:34:13.825 Removing: /var/run/dpdk/spdk_pid87709
00:34:13.825 Removing: /var/run/dpdk/spdk_pid87869
00:34:13.825 Removing: /var/run/dpdk/spdk_pid88021
00:34:13.825 Removing: /var/run/dpdk/spdk_pid88057
00:34:13.825 Removing: /var/run/dpdk/spdk_pid88230
00:34:13.825 Removing: /var/run/dpdk/spdk_pid88244
00:34:13.825 Removing: /var/run/dpdk/spdk_pid88280
00:34:13.825 Removing: /var/run/dpdk/spdk_pid88539
00:34:13.825 Removing: /var/run/dpdk/spdk_pid88758
00:34:13.825 Removing: /var/run/dpdk/spdk_pid89308
00:34:13.825 Removing: /var/run/dpdk/spdk_pid90001
00:34:13.825 Removing: /var/run/dpdk/spdk_pid90580
00:34:14.085 Removing: /var/run/dpdk/spdk_pid91267
00:34:14.085 Removing: /var/run/dpdk/spdk_pid91411
00:34:14.085 Removing: /var/run/dpdk/spdk_pid91495
00:34:14.085 Removing: /var/run/dpdk/spdk_pid92019
00:34:14.085 Removing: /var/run/dpdk/spdk_pid92082
00:34:14.085 Removing: /var/run/dpdk/spdk_pid92635
00:34:14.085 Removing: /var/run/dpdk/spdk_pid93054
00:34:14.085 Removing: /var/run/dpdk/spdk_pid93775
00:34:14.086 Removing: /var/run/dpdk/spdk_pid93896
00:34:14.086 Removing: /var/run/dpdk/spdk_pid93932
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94002
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94056
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94120
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94289
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94359
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94419
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94465
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94504
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94567
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94722
00:34:14.086 Removing: /var/run/dpdk/spdk_pid94932
00:34:14.086 Removing: /var/run/dpdk/spdk_pid95489
00:34:14.086 Removing: /var/run/dpdk/spdk_pid96109
00:34:14.086 Removing: /var/run/dpdk/spdk_pid96685
00:34:14.086 Removing: /var/run/dpdk/spdk_pid97359
00:34:14.086 Clean
00:34:14.086 00:56:50 -- common/autotest_common.sh@1453 -- # return 0
00:34:14.086 00:56:50 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:56:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:56:50 -- common/autotest_common.sh@10 -- # set +x
00:34:14.086 00:56:50 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:56:50 -- common/autotest_common.sh@732 -- # xtrace_disable
00:56:50 -- common/autotest_common.sh@10 -- # set +x
00:34:14.086 00:56:50 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:14.086 00:56:50 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:14.086 00:56:50 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:14.086 00:56:50 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:14.086 00:56:50 -- spdk/autotest.sh@398 -- # hostname
00:34:14.086 00:56:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:14.347 geninfo: WARNING: invalid characters removed from testname!
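The autotest.sh@399 through @407 runs that follow merge the base and test captures into cov_total.info and then strip DPDK, system, and example paths from the merged tracefile. Condensed, with the repeated --rc flags omitted, the flow amounts to this sketch:

    # Condensed sketch of the coverage post-processing below. The '/usr/*'
    # pass in the log additionally used --ignore-errors unused,unused.
    out=/home/vagrant/spdk_repo/spdk/../output
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done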
00:34:40.927 00:57:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:42.835 00:57:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:45.378 00:57:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:47.921 00:57:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:50.465 00:57:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:53.006 00:57:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:55.548 00:57:32 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:55.548 00:57:32 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:55.548 00:57:32 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:55.548 00:57:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:55.548 00:57:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:55.548 00:57:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:55.548 + [[ -n 5761 ]]
00:34:55.548 + sudo kill 5761
00:34:55.819 [Pipeline] }
00:34:55.833 [Pipeline] // timeout
00:34:55.838 [Pipeline] }
00:34:55.852 [Pipeline] // stage
00:34:55.857 [Pipeline] }
00:34:55.871 [Pipeline] // catchError
00:34:55.881 [Pipeline] stage
00:34:55.884 [Pipeline] { (Stop VM)
00:34:55.896 [Pipeline] sh
00:34:56.181 + vagrant halt
00:34:58.728 ==> default: Halting domain...
00:35:05.328 [Pipeline] sh
00:35:05.613 + vagrant destroy -f
00:35:08.186 ==> default: Removing domain...
00:35:08.774 [Pipeline] sh
00:35:09.056 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:09.067 [Pipeline] }
00:35:09.082 [Pipeline] // stage
00:35:09.088 [Pipeline] }
00:35:09.102 [Pipeline] // dir
00:35:09.109 [Pipeline] }
00:35:09.123 [Pipeline] // wrap
00:35:09.129 [Pipeline] }
00:35:09.142 [Pipeline] // catchError
00:35:09.152 [Pipeline] stage
00:35:09.154 [Pipeline] { (Epilogue)
00:35:09.167 [Pipeline] sh
00:35:09.503 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:14.791 [Pipeline] catchError
00:35:14.793 [Pipeline] {
00:35:14.803 [Pipeline] sh
00:35:15.086 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:15.086 Artifacts sizes are good
00:35:15.097 [Pipeline] }
00:35:15.112 [Pipeline] // catchError
00:35:15.123 [Pipeline] archiveArtifacts
00:35:15.131 Archiving artifacts
00:35:15.224 [Pipeline] cleanWs
00:35:15.237 [WS-CLEANUP] Deleting project workspace...
00:35:15.237 [WS-CLEANUP] Deferred wipeout is used...
00:35:15.244 [WS-CLEANUP] done
00:35:15.246 [Pipeline] }
00:35:15.261 [Pipeline] // stage
00:35:15.266 [Pipeline] }
00:35:15.280 [Pipeline] // node
00:35:15.286 [Pipeline] End of Pipeline
00:35:15.327 Finished: SUCCESS